In photography there are few arguments that are more misunderstood than sensor size and what it means to the pictures you can take.
In the world of digital photography 35mm is considered “Full Frame”, meaning that other sensor sizes are usually presented relative to 35mm.
Sensors that are smaller than 35mm Full Frame are usually referred to as “crop” formats. The most popular “crop” format is APS-C. While 35mm FF sensors are 36mm wide and 24mm tall, an APS-C sensor is roughly 24mm wide and 16mm tall (the exact dimensions vary slightly by manufacturer). Aside from the size difference, APS-C and Full Frame sensors have no other pre-defined differences.
Cameras with larger sensors than 35mm FF are usually referred to as “Medium Format”. In digital photography “Medium Format” isn’t to be confused with film’s version of “Medium Format”. In digital photography a camera is known as “Medium Format” when the sensor is larger than an FF sensor.
In film, the smallest medium format cameras shot on 6 x 4.5 film, and the typical medium format was 6 x 7 film with some cameras shooting on 6 x 9. These dimensions are in centimeters.
The following image visually shows the differences in various formats:
Sensor size plays a direct role in why larger sensors cost more money to manufacture.
The question everyone wants to know is which sensor size is best? Is full frame still the best or are crop sensors good enough now that it doesn’t matter anymore? And what about medium format, is that better than Full Frame?
First, I’ll try to answer why crop sensors even exist. The answer mostly comes down to manufacturing cost. Due to the way sensors are made, a bigger sensor costs more to make. Now, I know most of you are not going to just believe it when someone claims “x costs more because of… things.” So here is the in-depth reason why Full Frame sensors cost more than APS-C or Micro Four Thirds (MFT).
Camera sensors are semiconductors just like computer processors and they’re printed on wafers in much the same way. One of the main costs behind the sensor is the wafer the sensor is printed on. Only a certain number of sensors can be printed on each wafer. This means the number of sensors per wafer correlates to the cost per sensor, as larger sensors equate to fewer sensors per wafer and that means cost for each sensor is higher.
One way camera companies save money here is by using an older “process”, the term typically used for the minimum feature size that can be designed into a semiconductor. Older camera sensors used a 300nm process, while modern sensors use a 60nm process. All of that still pales in comparison to CPUs, which are currently made on single-digit-nanometer processes that would be prohibitively expensive for camera sensors.
Because the process isn’t as refined, less finely polished wafers can be used, which helps lower costs, but the cost is still very high. Wafers can cost many thousands of dollars each. In fact, one of the hardest parts of making semiconductors is making the wafers. The wafers that sensors are printed on must be polished extremely finely, with virtually zero imperfections. If you’ve ever had a camera with a “dead pixel” in it, that is often due to an imperfection in the wafer itself.
Such an imperfection may not even be the wafer’s fault: it could be a tiny speck of dust that landed on the wafer at some point, invisible to the naked eye but a giant, glaring problem for a camera sensor.
All these things add to the cost because nobody wants imperfect sensors, so a sensor with imperfections is often simply thrown away.
These imperfections are one thing that makes producing large sensors extremely difficult. Since it is nearly impossible to avoid some number of defects on a wafer, the larger the sensor, the higher the chance that any given sensor contains a defect, so a disproportionate share of sensors is lost as sensor size grows.
Modern sensor foundries typically use circular 300mm wafers. Why circular? Because wafers are slices of a cylindrical single-crystal silicon ingot, so round is the shape they naturally come in. The manufacturer must then fit rectangular sensors onto the circular wafer, and fitting square pegs into a round hole doesn’t work well. Because of how the geometry works, the larger the sensors are, the more space is wasted around the wafer’s edge. A company making FF sensors must figure out how to account for these losses.
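The geometry argument can be sketched with a standard die-per-wafer estimate. This idealized formula ignores scribe lanes, edge exclusion, reticle limits, and test structures, so real-world counts are considerably lower than it predicts, but it shows the trend: larger dies waste proportionally more of the wafer’s round edge.

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm, die_w_mm, die_h_mm):
    """Classic back-of-the-envelope estimate: wafer area divided by die
    area, minus an edge-loss term for the partial dies around the rim."""
    die_area = die_w_mm * die_h_mm
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area)
    return int(wafer_area / die_area - edge_loss)

# Full Frame (36 x 24 mm) vs. APS-C (~24 x 16 mm) on a 300mm wafer
ff = gross_dies_per_wafer(300, 36, 24)    # 59 idealized dies
apsc = gross_dies_per_wafer(300, 24, 16)  # 150 idealized dies

# APS-C yields more than the 2.25x area ratio alone would suggest,
# because the larger FF dies waste more of the round edge.
print(apsc / ff)
```

Note how the APS-C-to-FF count ratio comes out above the plain 2.25x area ratio; that extra gap is the edge waste the text describes.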
Roughly 24 Full Frame sensors can fit onto a 300mm wafer. If the wafer costs $1,000, that puts the cost at about $42 per sensor. APS-C fares much better: about 80 APS-C sensors fit on the same wafer, which puts the cost at $12.50 per sensor.
Now add in defects. I’ll just guess and say an average of 2 defects per wafer, each landing on a different sensor, so 2 sensors are lost on each wafer. Now the FF cost rises to about $45 per sensor while the APS-C cost only increases to about $12.80 per sensor. The cost increase due to defects for the FF sensor alone is roughly 30% of the total cost of one APS-C sensor!
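The arithmetic in the last two paragraphs can be wrapped in a small helper. As in the text, the pessimistic assumption is that every defect lands on a different sensor and ruins it:

```python
def cost_per_good_sensor(wafer_cost, sensors_per_wafer, defects_per_wafer):
    """Pessimistic model: every defect lands on a different sensor
    and ruins it, so only the remainder can be sold."""
    good_sensors = sensors_per_wafer - defects_per_wafer
    return wafer_cost / good_sensors

ff_base = 1000 / 24                        # ~$41.67 per FF sensor, no defects
ff = cost_per_good_sensor(1000, 24, 2)     # ~$45.45 with 2 defects
apsc = cost_per_good_sensor(1000, 80, 2)   # ~$12.82 with 2 defects

# The defect-driven increase for FF alone is roughly 30% of the
# entire cost of one APS-C sensor.
print(round(ff - ff_base, 2), round((ff - ff_base) / apsc, 2))
```

The same two defects barely move the APS-C price because they are spread across far more sellable sensors.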
There are other concerns here too, such as time. With more than three times as many APS-C sensors coming off each wafer, per-sensor throughput is far higher. So, if it costs a million dollars a day to run a sensor factory, that is another cost that APS-C lowers compared to Full Frame.
If we really do the accounting on costs, it’s clear that Full Frame cameras are more expensive for a reason, and they might even be a better deal than APS-C. Consider the Canon EOS RP, which retails for $999. How many full-featured APS-C cameras are cheaper than that?
Crop sensors are not only cheaper to make than FF sensors; they’re also easier to design lenses for. Crop sensor lenses can be smaller, lighter, and cheaper than FF lenses while still offering good brightness and decent image quality.
Does that mean FF is just too expensive to work with? Far from it, but it does mean that FF camera and lens designers have to work a little harder to produce images that will have the same excellent image quality across a larger image circle.
One advantage of crop sensor cameras is that they often work fine with full frame lenses. This opens up a variety of possibilities, one of which is using a “speed booster”. A speed booster is an adapter with a special optic in it that shrinks the image circle from the lens down to the crop sensor size. Concentrating the same light into a smaller circle makes the image brighter: going from FF to APS-C with a speed booster yields roughly a 1-stop gain, which can help offset the weaker high-ISO performance of crop sensors. The other benefit is that the lens behaves the same as it does on FF. So, if you use a 24-105mm FF lens on an APS-C camera with a speed booster, the image is no longer cropped in and the lens works much like a 24-105mm does on FF.
Speed boosters vary in quality from brand to brand and some of them do not allow AF to work anymore, but they’re still an interesting way to get more out of a mirrorless crop sensor camera when using FF DSLR lenses.
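The speed booster arithmetic can be sketched as follows, assuming the common 0.71x magnification for an FF-to-APS-C reducer and a 1.5x crop factor (typical values, not the specs of any particular product):

```python
import math

booster = 0.71      # optical magnification of the focal reducer (typical)
crop_factor = 1.5   # typical APS-C crop factor

focal = 105.0       # mm, the long end of a 24-105mm FF lens
f_number = 4.0      # the lens's native maximum aperture

boosted_focal = focal * booster         # ~74.6mm behind the booster
boosted_f = f_number * booster          # ~f/2.8: effectively faster glass
gain_stops = -2 * math.log2(booster)    # ~1 stop of extra brightness

# In FF-equivalent terms the field of view lands near the original:
ff_equiv = boosted_focal * crop_factor  # ~111.8mm-equivalent
print(boosted_focal, boosted_f, gain_stops, ff_equiv)
```

The crop factor and the booster magnification roughly cancel (1.5 x 0.71 is about 1.07), which is why the adapted lens frames almost like it does natively on FF.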
With all the advantages of crop sensors being listed, it almost seems like FF has been left out of the fun! Well, some of the same tricks work for FF sensors too, because 35mm was never the largest film format available. That means there are plenty of Medium Format film lenses that can be used with speed boosters on FF mirrorless cameras.
Compared to APS-C, a FF camera has a few other advantages as well. One of the most important is the relationship between frame size and aperture size: for the same framing and f-number, the larger format produces more background blur. While it is possible to get close to what FF can do on APS-C, it usually requires a similarly large and expensive lens, and as the sensor gets smaller it becomes increasingly difficult to produce strong background blur optically with a lens that is sensibly sized for the camera. The reasons are pure physics: it can be done, but the lens ends up disproportionately large compared to the camera. Cellphones are the extreme example. Their apertures are far too small to generate strong blur, so companies like Apple resort to faking it with digital blurring algorithms. That can work okay for casual family pictures, but most photographers and videographers prefer optical background blur, which is exact and free of unusual artifacts, and that preference pushes them toward FF sensors and FF lenses most of the time.
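One common way to quantify the format-versus-blur point is “equivalent aperture”: multiply the f-number by the crop factor to estimate which aperture would give similar background blur on full frame at the same framing. A rough sketch (the phone crop factor is an approximation for a small ~1/2.55" sensor):

```python
def equivalent_aperture(f_number, crop_factor):
    """f-number that would give roughly the same depth of field and
    background blur on full frame at the same framing."""
    return f_number * crop_factor

# An f/1.8 lens on different formats, in FF-equivalent blur terms.
for name, crop in [("Full Frame", 1.0), ("APS-C", 1.5),
                   ("MFT", 2.0), ("Phone (approx.)", 6.0)]:
    print(name, round(equivalent_aperture(1.8, crop), 1))
```

The phone row shows why digital blur is needed there: an f/1.8 phone lens blurs backgrounds roughly like an f/10.8 lens on full frame, which is to say hardly at all.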
It turns out that the FF sensor size is the sweet spot for sensors. It’s small enough to work well for long focal lengths up to 800mm or so, and yet it is large enough to generate lots of background blur when used with fast zooms like a 70-200mm f/2.8 or fast primes like a 35mm f/1.4.
You might be thinking, wouldn’t it be nice to use MFT and utilize the “crop factor” to turn an 800mm lens into a 1600mm lens? Well, yes it would, but the use cases are more limited than you might think, and that is all down to the atmosphere we live in. While frigid areas such as mountain tops and Antarctica offer crystal clear air year-round, most of the places where people live have lots of atmospheric disturbance that limits the resolution a super telephoto can achieve. So while you may be hoping to use an MFT sensor to zoom in to 2000mm, you won’t really see any more detail by doing so, due to atmospheric distortion.
Medium format film is far too large to work for a super-telephoto that would be affordable or usable by normal people, and smaller formats like Micro 4/3rds are giving up a lot of physical lens capabilities that professionals appreciate at shorter focal lengths.
In terms of basic physics, the 35mm frame size is a sweet spot: it makes the most sense for general-purpose cameras and lenses for a host of reasons rooted in the camera technology we’ve known for the past 150 years.
However, as mentioned earlier, more advanced processes exist that could be used for making camera sensors. The issue is that those processes are too expensive right now, which makes using them for large sensors a difficult proposition. It is much more economical to use the most advanced processes for small sensors, such as cellphone camera sensors.
It’s these in-between moments where cost may temporarily make it seem that crop sensors can be superior to full frame.
In terms of performance it’s highly dubious to call crop sensors “better”. Whatever technology is used in a crop sensor can be used in a Full Frame sensor as well; it may cost more, but that’s a different question. Ultimately, one can simply crop the image from a high-resolution FF sensor. In fact, on my EOS R5 I can shoot 4K on the full sensor, and I can shoot 4K with an APS-C crop, so the larger sensor can effectively function as two different sensor sizes. If I wanted to, I could use an APS-C lens like the Canon EF-S 17-55mm and shoot 4K video with it on my R5, just like what can be done on an APS-C sensor or MFT sensor. There’s no great reason other than cost to choose one of those smaller formats over Full Frame.
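As a quick numeric check of the crop-from-full-frame point (the sensor dimensions are the R5’s published readout, treated here as approximate):

```python
# Does an APS-C crop of the R5's sensor still cover UHD 4K?
# The R5's full readout is about 8192 x 5464 pixels (45 MP),
# and Canon's APS-C crop factor is 1.6x.
full_w, full_h = 8192, 5464
canon_crop = 1.6

crop_w = round(full_w / canon_crop)  # 5120 px wide
crop_h = round(full_h / canon_crop)  # 3415 px tall

uhd_4k_width = 3840
print(crop_w >= uhd_4k_width)  # True: the crop still oversamples 4K
```

Even after throwing away everything outside the APS-C region, roughly 17 megapixels remain, comfortably more than UHD 4K needs.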
We can argue this point back and forth for days, but the main reason APS-C exists and is popular is the cost advantage, and it is a big advantage. For instance, the new Canon EOS R7 is $1,499, and while that may seem like a lot, let’s compare it to the older EOS R camera.
When the EOS R came out it seemed like a really great deal, but now that the R7 is out, it’s hard to imagine too many people buying the EOS R anymore. While the R is still a very nice camera that works well for general photography, the R7 seems to offer a lot more for the money, at least on paper.
At the end of the day, which camera is best for you comes down to how much money you want to spend and what you’re hoping to get out of it.