Let’s delve into the world of resolutions and the pixels they consist of. First, what do you actually know (or understand) about what a picture is? I’m sure you’ve been inundated with terms like “1080p” or “10 megapixel on-the-go everyday activity camera,” but do you really understand what they mean? Now that I have you questioning yourself, do you wonder whether these terms truly represent the higher standard of quality you’re hyped to expect? Let’s begin by trying to grasp where our quality comes into play: PAR (pixel aspect ratio). It is a mathematical ratio that describes how the width of a pixel in a digital image compares to its height (much the way square footage is a standard unit of measurement for buildings). There are a number of articles and tutorials online about this subject. To make matters worse, there is real complexity in the PAR/DAR/SAR terminology, but I will try to present the problem as simply as possible, and in terms of display monitors and DVR exporting rather than motion-picture cameras. After all, we’re all here to learn about our security devices and how to choose them wisely.
Most digital imaging systems display an image as a grid of tiny, square pixels. The ratio of the width to the height of an image as displayed is known as the aspect ratio, or more precisely the DAR (display aspect ratio). For TV, both analog and digital, the DAR is traditionally 4:3. There are several complicating factors in understanding PAR, particularly as it pertains to the digitization of analog video:
Analog video does not have pixels, but rather scan lines, so it has a well-defined vertical resolution (the number of scan lines) but not a well-defined horizontal resolution, since each line is an analog signal. Because of overscan, some of the lines at the top and bottom of the image are not visible, as is some of the picture on the left and right. The line count may also be rounded (NTSC is usually quoted as 480 visible lines). In addition, analog video signals are interlaced: each image (frame) is sent as two “fields,” each containing half the lines. Thus pixels are either twice as tall as they would be without interlacing, or the image must be deinterlaced. So, in a nutshell, aspect ratios allow a player to “stretch” video during playback, using the aspect ratio number as guidance for how much to stretch it. For example, both the NTSC standard 4:3 and NTSC widescreen 16:9 resolutions are stored as 720×480; for PAL, both are 720×576. What differs between standard and widescreen is the pixel aspect ratio, i.e., the horizontal size of each pixel. On NTSC, for example, the DV widescreen pixel aspect ratio is set by the DV consortium at roughly 1.2121. Take the NTSC resolution of 720×480, compute 720 × 1.2121, and you end up with about 873 (872.7, to be precise). And that is the actual rendered resolution of DV widescreen on a PC monitor: 873×480. That’s how many pixels of your screen will be used to display your DV footage, even though the camera actually shot 720×480. The Europeans (who use PAL) are a bit luckier: they have higher resolutions (720×576) and different pixel aspect ratios.
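The stretch arithmetic above can be sketched in a few lines of Python. The 720×480 storage size and the ~1.2121 (40/33) widescreen PAR come from the article; the function name is just illustrative:

```python
def display_width(stored_width: int, par: float) -> int:
    """Width in square screen pixels after stretching by the pixel aspect ratio."""
    return round(stored_width * par)

# NTSC DV widescreen: 720 stored pixels stretched by 40/33 (~1.2121)
print(display_width(720, 40 / 33))  # -> 873
```

A PAR of exactly 1.0 (square pixels) would leave the width unchanged, which is why computer-native formats need no stretching.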
Are you guys still with me? If so, here’s the part you probably came for (or have been eagerly waiting for).
There are a TON of digital video recorders out there, and a lot of them still peddle quality that falls short of what today’s consumers demand. I think of many of them as the old cliché goes: “wolves in sheep’s clothing.” So many devices are advertised with false terminology or deceptive claims about their capabilities. There is real confusion in the naming of the high-quality video resolutions used by the video surveillance systems on the market, and that confusion has allowed less-than-scrupulous system designers to fool unsuspecting customers. Here’s what you want to understand while weeding through these false prophets.
Allow me to debunk some specific misconceptions (these numbers are for NTSC video):
- D1 is not the same as 4CIF
- D1 is a resolution of 720×480 pixels
- 4CIF is 704×480 pixels
- DCIF is not the same as 2CIF
- DCIF is 528×320 pixels
- 2CIF is 704×240 pixels
Specifically, DCIF has the same total number of pixels (168,960) as 2CIF, but 2CIF is stretched horizontally, trading vertical lines for width. Then there is 960H, a newer standard for security cameras and security DVRs that provides higher-resolution images using advanced image sensors. Security cameras capable of 960H produce an image 960 pixels wide by 480 pixels tall (960×480). OK now, let’s review:
- 960H is 960×480 pixels
- D1 is 720×480 pixels
- 4CIF is 704×480 pixels
- DCIF is 528×320 pixels
- 2CIF is 704×240 pixels
- CIF is 352×240 pixels
- QCIF is 176×120 pixels
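To make the comparison concrete, here is a small sketch that tallies the total pixel count of each NTSC format listed above (the dictionary layout is my own; the dimensions are from the list):

```python
# NTSC surveillance resolutions from the list above: name -> (width, height)
RESOLUTIONS = {
    "960H": (960, 480),
    "D1":   (720, 480),
    "4CIF": (704, 480),
    "DCIF": (528, 320),
    "2CIF": (704, 240),
    "CIF":  (352, 240),
    "QCIF": (176, 120),
}

for name, (w, h) in RESOLUTIONS.items():
    print(f"{name:5s} {w}x{h} = {w * h:,} pixels")

# DCIF and 2CIF carry the same pixel count (168,960) in different shapes
assert 528 * 320 == 704 * 240 == 168_960
```

Running it makes the DCIF/2CIF trap obvious: identical pixel budgets, very different shapes on screen.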
FPS (Frames Per Second)
Frame rate (also known as frame frequency) is the frequency (rate) at which an imaging device produces consecutive images, called frames. The term applies across the board to film, video cameras, computer graphics, and motion-capture systems. Frame rate is most often expressed as FPS (frames per second); for progressive-scan monitors it is also expressed in hertz (Hz).
30p is a progressive format that produces video at 30 frames per second. Progressive (non-interlaced) scanning mimics a film camera’s frame-by-frame image capture, and to the human eye this is known as REAL TIME. There are a lot of ways of presenting these numbers, many of them misleading (like our US government’s figures on taxes and the national debt). Remember that 30 frames per second is real time, but only for a single video stream. So let’s say you wanted to accommodate 4 cameras simultaneously, all in real time: you need 120 frames per second of available, unshared capacity. Delving even further, you have to ask at what resolution the real-time image is being displayed, because many systems can only record in real time if the resolution is lowered. With this newfound knowledge, do you want to sacrifice frames for quality, or vice versa? I wouldn’t!!! It’s much simpler to find and procure a system that can accomplish both simultaneously. To reiterate, let’s review:
One thing you need to be careful about when analyzing a DVR’s specifications is what “frames,” “fields,” or “images” per second actually mean, and how each figure is relevant to your pursuit of quality. The quoted number may be:
- The total number of frames/images per second for the entire card to be spread across all cameras (cumulative total)
- The total number of frames for each individual channel
- The maximum frame capacity of the hardware not taking into account software switching, simultaneous functions, etc. (rated hardware capacity)
- Display speed
- Recording speed
- “I” frame or “P” frame calculation
- A combination of all of the above
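The frame-budget arithmetic behind that warning can be sketched as follows. The 30 fps real-time figure is from the article; the function names and the 16-camera scenario are illustrative assumptions:

```python
REAL_TIME_FPS = 30  # NTSC real time for a single stream, per the article

def required_fps(num_cameras: int) -> int:
    """Cumulative rating needed for real time on every channel."""
    return num_cameras * REAL_TIME_FPS

def per_channel_fps(cumulative_rating: float, num_cameras: int) -> float:
    """What each camera actually gets when a cumulative rating is shared."""
    return cumulative_rating / num_cameras

print(required_fps(4))           # 4 real-time cameras -> 120
print(per_channel_fps(120, 16))  # a "120 fps" DVR on 16 cameras -> 7.5 each
```

The second call is the trap: a card advertised at 120 fps sounds generous until you spread it across 16 channels and end up with a choppy 7.5 fps per camera.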
Ask yourself: is it really images, or are they actually counting frames? Either way, the figures can mislead. The frame-rate issue is a very tricky one. The fact is that the speeds manufacturers quote are usually the “maximum” obtainable, meaning under ideal conditions, and do not take into account anything else the DVR, software, or video card might be doing. Adding further to the confusion, some manufacturers quote “IPS” (images per second) and others “FPS” (frames per second). Why do they do this? Because 2 IPS = 1 FPS. Put as an equation, it takes 60 IPS to equal 30 FPS, a single real-time stream. It becomes more convoluted still, because among “images” per second there are “initial” frames and subsequent frames that refresh only the changed portions of the image. The DVR’s video card “captures” the image and records video, but what plays back and displays the video on screen? The answer is that same video card: even while it is “capturing” (encoding) the video, it also handles the video display (the decoding process).
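The 2 IPS = 1 FPS relationship above comes down to a trivial conversion (interlaced video delivers two fields, i.e., two “images,” per full frame). A minimal helper, with the name as my own invention:

```python
def ips_to_fps(ips: float) -> float:
    """Convert images (fields) per second to full frames per second."""
    return ips / 2  # two interlaced fields make one frame

print(ips_to_fps(60))  # -> 30.0, i.e., one real-time stream
```

So when a spec sheet brags about 60 IPS, mentally halve it before comparing against the 30 FPS real-time benchmark.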
I hope I was informative and insightful on this matter. Until next time guys, stay awesome.