Sometimes you get sucker punched by sudden nostalgia, urging you to search for YouTube clips from 10+ years ago. But when you find them you’re hit with something else: the horror of realizing how bad the image quality is.
Well, as crazy as it might seem: the video is the same as it was 10 years ago. It’s just that your video preferences have changed.
There are many reasons for this change, but basically it can be boiled down to the rise of smartphones – along with improved high speed internet access.
While this might explain the revolution, let’s take a deeper look into what has actually changed over the years. These are the cornerstones of image quality – and why you might want to pay attention to them in order to make your videos look better.
In the early 2000s, the first flatscreen “HD ready” TV monitors started popping up. This was the first truly disruptive innovation on the TV market since the debut of colour TV.
After decades of only minor quality-of-life changes and slight increases in size, along came flat screens – noticeably bigger and ready for high definition.
But what does “HD ready” really mean? And why was this shift such a game changer?
Let’s start with looking at resolution:
All videos consist of frames, which in turn consist of pixels. Without getting too technical: a pixel is the smallest component of an image. Every pixel contains data which, together with all the other pixels, creates a full image. Think of a mosaic, where all the individual squares create the final artwork.
Naturally, a higher pixel count makes for a higher-quality picture, since every image has more information to work with.
Pixels are displayed both horizontally and vertically, based (usually) on the aspect ratio of the image. A 16:9 image, for example, can have the resolution of 1280 pixels (horizontal) by 720 pixels (vertical) while a square image can be 1000 pixels x 1000 pixels.
What changed when TVs became flatscreen and “HD ready” was that the pixel count went from 720 x 576 pixels to 1280 x 720 pixels, increasing the resolution significantly and making it the first High Definition (HD) format.
Not long after, monitors with even higher pixel counts (such as 1920 x 1080) also started showing up. With more than twice the pixel count of standard “HD ready” screens, this resolution standard gained the title Full HD.
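As a quick sketch of how these standards compare, the pixel counts can be computed directly (a simple Python illustration; the resolutions are the ones mentioned above):

```python
# Pixel counts for the resolution standards discussed above
resolutions = {
    "SD (PAL)": (720, 576),
    "HD ready": (1280, 720),
    "Full HD": (1920, 1080),
}

sd_pixels = 720 * 576  # baseline: standard definition
for name, (width, height) in resolutions.items():
    pixels = width * height
    print(f"{name}: {pixels:,} pixels ({pixels / sd_pixels:.1f}x SD)")
# → SD (PAL): 414,720 pixels (1.0x SD)
# → HD ready: 921,600 pixels (2.2x SD)
# → Full HD: 2,073,600 pixels (5.0x SD)
```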
Even though the technology debuted almost 20 years ago, it took roughly a decade before it became a household standard.
The reasons for this are many, but a crucial component was that there wasn’t much content compatible with these new standards. Broadcast TV had a set standard of 720 x 576 resolution (more often than not with a 4:3 aspect ratio), and online video was often capped by both slow internet connections and server infrastructure.
Aside from this, the cameras available to the average consumer were either compact digital cameras or early DV handycams – not always suited for high-quality recordings (hence the example in the introduction).
Technology today however, is evolving at an exponential rate, reaching new heights every day.
Since the HD revolution took off, standards and expectations for the content we consume have steadily risen.
Today Full HD is the bare minimum both when it comes to TV screens and video feeds.
4K resolution (3840×2160) and even 8K (7680×4320) are now starting to make their way into peoples homes. Needless to say: we’ve grown used to very high quality imagery.
Looking back at what we once held as cutting edge can undoubtedly be painful.
Resolution is fundamental for determining the quality of an image. It is however not the only factor.
Bitrate, in the context of video, is a measurement of the amount of data used to encode a single second of video. Bitrate is therefore always expressed per second, as in megabits per second (Mbps), kilobits per second (kbps) and so on.
A higher bitrate improves the quality of a video at the cost of increasing the file size.
It works in tandem with the image resolution: the higher the resolution of your video, the higher the bitrate you’ll need to make it look good.
To put it simply: at a bitrate of 5 Mbps, a 1920 x 1080 video will be less compressed than a 4K video at the same 5 Mbps, because the same amount of data has to cover four times as many pixels.
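One way to make that comparison concrete is to look at the average number of bits the encoder can spend per pixel. This is a simplified model (real codecs distribute bits unevenly across frames), with a 30 fps frame rate assumed purely for illustration:

```python
def bits_per_pixel(bitrate_mbps, width, height, fps=30):
    """Average bits available per pixel per frame at a given bitrate."""
    return (bitrate_mbps * 1_000_000) / (width * height * fps)

# The same 5 Mbps budget spread over Full HD vs 4K (assumed 30 fps)
print(f"{bits_per_pixel(5, 1920, 1080):.3f}")  # → 0.080
print(f"{bits_per_pixel(5, 3840, 2160):.3f}")  # → 0.020, a quarter of the bits per pixel
```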
High-resolution and high-frame-rate videos need a higher bitrate in order to maintain their quality.
Knowing all of this, it becomes clear why Full HD at 5 Mbps is such an established standard for internet video. Not only does it do the resolution justice, it also keeps video accessible for people with slower internet connections, while keeping server and bandwidth costs to a minimum.
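Bitrate also translates directly into file size and bandwidth. One caveat when doing the maths: bitrate is measured in bits, while file sizes are usually shown in bytes, so divide by 8. A quick sketch:

```python
def file_size_mb(bitrate_mbps, seconds):
    """Approximate file size in megabytes (bitrate is in bits, so divide by 8)."""
    return bitrate_mbps * seconds / 8

# One minute of Full HD video at the 5 Mbps web standard
print(file_size_mb(5, 60))  # → 37.5 (MB)
```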
Along with bitrate comes codecs.
Derived from the words coder and decoder, the purpose of a codec is to take raw recorded video data and turn it into a manageable, viewable format.
This is achieved by first taking the raw video file and processing and encoding it for storage, to then be able to decode it when you want to watch or edit it.
Some of the most common codecs for personal use are MP3 (MPEG audio) for audio and MPEG-4 (H.264 for video, AAC for audio) for video.
All of these codecs offer the ability to create small files while not losing quality in any noticeable way. They are commonly used as recording codecs for smartphones and DSLR cameras.
Another reason for them being so popular is their ability to be played on almost any platform and media player.
Recently a new MPEG codec called HEVC (or H.265) was introduced for smartphones, allowing better compression than H.264 – meaning even smaller files without any noticeable loss in quality.
It hasn’t exactly revolutionised the video market, as of yet, but that doesn’t mean it won’t play a bigger role in the future. Not everyone is an early adopter – and after all: if H.264 ain’t broke… why fix it?
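To illustrate what that compression gain could mean in practice, here is a rough estimate. The ~50% bitrate saving at comparable quality is the figure commonly cited for HEVC, but the real gain varies by content:

```python
H264_BITRATE_MBPS = 5.0  # typical Full HD web bitrate (from the text)
HEVC_SAVINGS = 0.5       # assumed ~50% bitrate saving at similar quality

hevc_bitrate = H264_BITRATE_MBPS * (1 - HEVC_SAVINGS)
minute_h264_mb = H264_BITRATE_MBPS * 60 / 8  # one minute, bits -> bytes
minute_hevc_mb = hevc_bitrate * 60 / 8
print(minute_h264_mb, minute_hevc_mb)  # → 37.5 18.75
```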
So what does this all boil down to? Why should you keep track of what resolution you’re filming in or what bitrate your clips have? The answer is… you shouldn’t. Except when you should.
Filming with a smartphone has streamlined the art of video making to such a degree that anyone can get the hang of it without learning the nitty-gritty of the technical side of things.
Simultaneously our preferences have changed and people are, in general, more informed and critical about the quality of the content they consume. Having a basic understanding of what makes a video look great naturally gives you the edge to stand out amongst the crowd. It can also help you understand and avoid typical mistakes when recording and handling files.
One example is transferring your footage from a smartphone to your computer. It can be tempting to simply attach the files to an e-mail and send them to yourself. This will, however, compress the files immensely, resulting in very low resolution and poor-quality video.
A better alternative is to utilise other wireless services such as Airdrop (for Apple) or cloud storage services, like Dropbox.
As a Qbrick user, however, it’s even simpler – you can upload all of your smartphone footage directly to the QVP. That way you both have it stored indefinitely and can use it for editing projects within the QVP.
Another easy mistake is being a bit too ambitious and maxing out your camera settings in order to record 4K footage at 60 fps. While this will make your raw videos higher quality, it will also drastically increase file sizes – which can cause trouble (again) when you want to transfer the files. Not to mention it will max out your phone’s storage capacity quicker than you can say Bandersnatch.
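To see how quickly maxed-out settings eat storage, here is a back-of-the-envelope estimate. Both figures are assumptions for illustration: a ~100 Mbps recording bitrate is in the typical range for smartphone 4K/60 footage, but actual rates vary by device and codec:

```python
RECORDING_BITRATE_MBPS = 100  # assumed 4K/60 smartphone recording bitrate
FREE_STORAGE_GB = 64          # assumed free space on the phone

# GB -> gigabits -> megabits, then divide by the recording bitrate
seconds_of_footage = FREE_STORAGE_GB * 8 * 1000 / RECORDING_BITRATE_MBPS
print(round(seconds_of_footage / 60, 1))  # → 85.3 (minutes before storage runs out)
```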
As mentioned earlier: almost all video platforms compress the videos you upload, downscaling both resolution and bitrate to fit their profiles. This raises the question of whether it’s worth going through the potentially tedious process of maxing out your quality settings.
To sum it all up: having a keen knowledge of what makes a great-looking video is helpful in several ways. Being able to tweak settings for different situations gives you better control over the final output and helps you avoid long transfer and upload times. Higher quality is, of course, still desirable and will most likely play a bigger part in the future – but it’s good to know how to cut corners when necessary.