Art Lenses and the Photographic Print – are you wasting money?

Lightjet versus Lambda – are you wasting the money you spent on expensive art lenses?

Large format printing and why the LightJet is superior: It’s in the details – literally. The construction of the LightJet is superior to the Lambda’s because of the way each device projects its laser light onto the photographic paper.

The LightJet loads its chromogenic print paper into a perfectly round, precision-engineered drum, and the laser beam travels dead center along the axis of the drum’s circle. This means the laser always strikes perfectly perpendicular to the paper, producing a perfectly round laser dot across the entire image area. The result is maximum sharpness and detail across the entire print – corner to corner, edge to edge.

Unfortunately, the Lambda uses a stationary laser that swings in an arc as the paper moves along a flat track. This causes the laser dot to be “bologna cut” as it moves away from the center of the print toward the edges, creating a longer and longer oval pattern. So the only perfectly sharp area of the print is precisely down the middle; as the laser sweeps toward the edges, the print increasingly loses detail and sharpness. While this design allows for extremely long prints – over ten feet – the loss of quality is substantial and noticeable. Such print lengths provide productivity benefits to the company making the print, but not to the fine-art customer looking for the finest print available.

“For photographers who have invested in expensive art lenses to get edge-to-edge sharpness and enhanced IQ, it’s clear that the flat-transport technology is taking away the benefits you paid a bundle to get.” ~ John Harris, 30-year industry veteran

We are a nationally recognized leader in fine-art grade large format archival printing for the professional Creative. We price competitively whether you need one print or a full edition, and our TrueArt™ Process guarantees your satisfaction.

Learn more about our chromogenic print options. 

The Difference Between Pigment Prints and LightJet Digital C-Prints

Q: What is the difference between a LightJet digital C-print and a Giclee? Which is better quality? Thanks! ~ A. A.

LightJet uses laser light to expose chromogenic photographic paper, which is then chemically developed and fully cleansed to create the archival dyes that render the fine-art image.

Giclee printing uses electrical impulses to deposit archival pigments onto fine-art substrates such as canvas or watercolor papers, similar to the way a home inkjet sprays inks.

As for your question about which has better quality: though you will find fans on both sides of the fence, neither is truly inferior to the other in “quality,” but each has differences that can be appreciated. Fine art photographers tend to prefer the LightJet Digital C-Print because the photographic print has a certain look and feel that works very well with the art form, and the color tends to be less artificially saturated and thus feels natural. Giclee Pigment Prints are often favored by fine artists because the substrate selections of watercolor paper or canvas are closer to those of their original artwork. Both possess extremely high sharpness and wonderful color, contrast, and detail. The LightJet is continuous-tone and does not use dots, allowing for smoother tones, more detail in the highlights, and richer saturation in the shadows. Giclee pigments allow for more mid-tone color saturation, especially in the yellows and magentas.

Lightjet and Giclee Pigment are both for reproduction of fine art, and exceed the quality of consumer level printing by significant margins. When combined with professional archival fine art substrates and the skills of a master printer the result is a genuine fine art print. Both prints are museum quality and as such, certificates of authenticity may be used with integrity.

Our LightJet and Giclee Pigment prints have been hung in fine art museums and the Smithsonian, so rest assured you are getting the “real deal” in a fine-art-grade print regardless of your choice.

Picking the right colorspace based on image content

An often overlooked aspect of color-spaces is the ability to use them to affect the overall “look” of the image. This 3D model represents four color spaces:

ProPhotoRGB (in red), Adobe1998 (translucent white), sRGB (white wire-frame), and, in yellow, a professional giclee printer – the Epson 9900 on Ultra Smooth Fine Art Paper.


Top view of these gamuts shows their maximum saturation limits. The yellow wire-frame in the center is the available gamut of Epson’s 9900 Inkjet Printer.

The top view shows the saturation boundaries of the colorspaces. The larger the space appears here, the more saturation the color space will support.



Gamut brightness limits of ProPhotoRGB, Adobe1998, sRGB and the Epson 9900

The side view shows the brightness levels available in the various color spaces. White is represented at the top and black at the bottom.



Color-spaces with larger hulls allow for greater saturation limits. This means a red with an RGB build of 255, 120, 120 will appear more saturated in ProPhotoRGB than it does in Adobe 1998.  Neutral colors will appear identical for hue across the color spaces, though the density (brightness) of those neutrals may differ.

How this affects the look of the image is quite dramatic. A side effect of saturation limits is their effect on the visible difference between two neighboring color values. The examples below are screen grabs of the same color builds across the three most popular working color-spaces. The left side of each box is a build of 255R 255G 126B, and the right side is 255R 255G 112B.

The color variation is barely perceptible in sRGB, but noticeable in the slightly larger Adobe1998 and more so in the much larger ProPhotoRGB. You can also see that as the size of the colorspace increased, the saturation increased.

Images with subtle variations in tone may be adversely affected by the use of a larger colorspace such as ProPhotoRGB; however, if increased separation is what you are looking for, tagging your file as ProPhotoRGB may be beneficial.

These samples, like the ones above, contain identical Photoshop builds but were assigned different spaces. As the size of the color space increases, you can see that the color separation also increases, resulting in a loss of subtlety. This loss means an increase in color noise and, in 8-bit files, a potential for banding. True 16-bit files (not files converted from 8-bit to 16) have little likelihood of banding as long as they remain in 16-bit. However, a large portion of professional printing devices will eventually convert your 16-bit file to 8-bit for printing, and this can result in banding issues. Regardless of bit-depth, saturation will be higher in the larger spaces, so it’s something to be aware of and use to your advantage when needed.
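The banding risk from that 16-bit-to-8-bit conversion can be sketched in a few lines of plain Python – an illustrative toy, not a simulation of any real print pipeline:

```python
# A smooth 1,000-pixel gradient stored at 16-bit precision
gradient_16 = [round(i * 65535 / 999) for i in range(1000)]
print(len(set(gradient_16)))  # 1000 distinct levels - steps too fine to see

# The same gradient after conversion to 8-bit for printing
gradient_8 = [v * 255 // 65535 for v in gradient_16]
print(len(set(gradient_8)))   # only 256 distinct levels remain
```

Neighboring pixels that differed at 16-bit now share one of only 256 values, and in a larger working space each of those steps covers a bigger color jump, which is why banding shows up sooner.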

This image is in the sRGB color-space. Notice the presence of subtle tones

The variations in tones in this Adobe1998 image are pleasant yet still fairly subtle.

The subtleties in the lily pad are nearly lost. A visible increase in color noise is also present in the water.

You will notice in the examples that as a file is moved into spaces of increasing size, subtleties in the colors can be lost. You can see larger views by clicking on the sample images.

As saturation increases, the visible difference between neighboring colors increases. One artifact of this is an increase in color noise. This becomes quite apparent when comparing the reflections between the sRGB and the ProPhotoRGB files. Also worth noting is how the “sky” colors in the reflection actually lose saturation in the larger ProPhotoRGB space. This is because Adobe1998 and sRGB have greater saturation in a significant range of values in this region of color. So if sky saturation is of critical importance in your print, do a bit of testing before you commit to ProPhotoRGB, and compensate when possible. Sometimes we have to accept some benefits at the expense of others, and working color-spaces are no exception.


sRGB color-space file converted to the Epson 9900 print space using perceptual intent.


Adobe 1998 RGB color-space file converted to the Epson 9900 print space using perceptual intent.


ProPhotoRGB color-space file converted to the Epson 9900 print space using perceptual intent.

So use your colorspace selection as a tool to further optimize your print results. Be aware that you aren’t losing or gaining numbers of colors by using a different space; you are merely matching image type to saturation limits and the distance between colors. And as always, should you have any questions, reach out in the comment section below!

How Colors are Created in the Digital World

This short basics post will prime you to understand how colors are specified in digital files. In the reproduction market, of which Reed Art & Imaging is a part, we use digitally driven devices to make faithful reproductions of original art, photographic captures, and digital graphic designs. To accomplish this task with any hope of repeatable accuracy, there must exist a standard system by which colors can be recorded, transferred, translated, and output. These standards exist in theoretical color models. These models are a virtual shape, such as a box, sphere, polygon, or other shape that, if it were real, would contain every color visible to the human eye.

Image credit: SharkD (own work) [GFDL or CC BY-SA 3.0-2.5-2.0-1.0], via Wikimedia Commons

The RGB color model mapped to a cube, with red values increasing along the horizontal x-axis to the left, blue increasing along the y-axis to the lower right, and green increasing along the vertical z-axis toward the top. The origin, black, is the vertex hidden from view.

Because each model is represented by a shape, they are referred to as color “spaces,” for the space the object would occupy in the theoretical environment of all colors – visible and invisible. The graphic above is an example of a space that uses Red, Green, and Blue to yield the final color we want to create.

Colors come to our eyes in two ways – either transmitted from a light source or reflected off of a surface.

RGB is called the “primary” space, and its numerical system can be equated to the brightness values of transmitted light – or how intensely the Red light, Green light, and Blue light are shining. As the numeric values increase, the lights get brighter and the closer to white they become. More on that in a bit.

In a CMYK model (the secondary space) we are representing pigments that absorb light, so as a number increases on its scale, more light is absorbed. With CMYK, the higher the number, the darker the color appears – exactly opposite of RGB.

In either space, the ratio of how the colors are blended determines the color, while numeric values contribute to how bright or dark it is.
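That opposite relationship between the two models can be sketched in a few lines of Python. This is the naive complement only (real RGB-to-CMYK conversion goes through ICC profiles and adds a black channel), purely to illustrate that higher RGB numbers mean more light while higher CMY numbers mean more pigment:

```python
def rgb_to_cmy(r, g, b):
    """Naive complement: as an RGB value rises (more light),
    the matching CMY value falls (less pigment), and vice versa."""
    return (255 - r, 255 - g, 255 - b)

# Pure red in RGB needs zero cyan but full magenta and yellow:
print(rgb_to_cmy(255, 0, 0))      # (0, 255, 255)

# Middle gray stays middle gray in both models:
print(rgb_to_cmy(128, 128, 128))  # (127, 127, 127)
```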

For simplicity, the rest of this article will use only one color model. I’ll use the RGB model for these examples because it’s the model that our clients use and best supports high-end reproduction digital printing.

How Color is Expressed

Color is usually expressed in human terms by its:

  • Value (light to dark)
  • Saturation (how close to pure is it)
  • Hue (red, purple, green, yellow, orange, etc.)

In the data-driven world, it’s expressed as a recipe of the colors required to build its final value, saturation, and hue. Image and graphics applications usually use the standard scale of 0–255 (what is called 8-bit color) to represent the amount of each color present, with 0 being none and 255 being maximum; dark colors are closer to 0 and light colors are closer to 255. Equal amounts of each color create neutral hues (grays), and as the numbers increase from 0 to 255 the value moves from black to white.

Darker values are closer to zero and lighter values are closer to 255

These numbers from 0 to 255 are called “Levels” and in our examples fall into a model of 256 levels – with zero being included as a level.  In an RGB color space, each color is built using various levels, or recipes, of Red, Green and Blue.  Dark Red has a different recipe than Light Red, and the recipes are different for a saturated versus less saturated red.
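The “recipe” idea can be made concrete with a tiny helper. This is an illustrative sketch (the function name and wording are my own, not a standard API):

```python
def describe_build(r, g, b):
    """Classify an 8-bit RGB recipe the way the text does:
    equal amounts of all three colors means a neutral hue."""
    if r == g == b:
        shade = "black" if r == 0 else "white" if r == 255 else "gray"
        return f"neutral ({shade}), level {r}"
    return f"color build R{r} G{g} B{b}"

print(describe_build(0, 0, 0))        # neutral (black), level 0
print(describe_build(128, 128, 128))  # neutral (gray), level 128
print(describe_build(255, 0, 0))      # color build R255 G0 B0
```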

Fully saturated red is a different build than a less saturated red.

Dark Red has a different build than Light Red.


As you can see in the first example above, a fully saturated hue has 255 of its requisite colors and none of the others. As the color desaturates, it gains some of the other colors; it is moving closer to a neutral gray. In the second example we can see that the darker red contains none of the other colors, but the Red number is dropping closer to zero, making it “blacker.” This darker red is as saturated as it can get at its present value.
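Python’s standard colorsys module can confirm this: the darker red keeps full saturation while only its value (brightness) drops. (colorsys works on 0–1 floats, so we divide the 8-bit levels by 255.)

```python
import colorsys

# Fully saturated red at two different brightness levels
for r, g, b in [(255, 0, 0), (128, 0, 0)]:
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    print(f"R{r} G{g} B{b} -> saturation {s:.2f}, value {v:.2f}")
# R255 G0 B0 -> saturation 1.00, value 1.00
# R128 G0 B0 -> saturation 1.00, value 0.50
```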

A critical point to understand is that in an RGB or CMYK file, color and density are interconnected: any change you make to color data will result in changes to density, and vice versa.


The other primary colors are built in the same way, like this:

Color builds of fully saturated Red, Green and Blue.

The secondary colors are built from equal amounts of two of the three colors:

Graphical representation of the secondary color recipes

Secondary colors are built from two of the three colors

These secondary colors are thought to be the “opposite” colors to those in the previous example. You will notice their recipes are directly inverse. Red is R255 G0 B0 and Cyan is R0 G255 B255.  They are opposites because when the two colors are combined, they cancel each other out and make gray.  Equal parts of Red and Cyan make gray, same goes for Green with Magenta, and Blue with Yellow.
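The “opposites cancel out” claim is easy to verify numerically – averaging each channel of a color and its inverse always lands on neutral gray. A quick sketch:

```python
def mix_average(c1, c2):
    """Blend two RGB colors in equal parts (simple per-channel average)."""
    return tuple((a + b) // 2 for a, b in zip(c1, c2))

red, cyan = (255, 0, 0), (0, 255, 255)
print(mix_average(red, cyan))       # (127, 127, 127) - neutral gray

green, magenta = (0, 255, 0), (255, 0, 255)
print(mix_average(green, magenta))  # (127, 127, 127)
```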

Intermediate colors such as Orange, Brown, Purple, Daisy Yellow, Lemon Yellow etc. are built by using various values of the three colors where at least one of the colors is greater than 0 and less than 255:

Intermediate color builds

Intermediate colors result from builds using two or more colors.


This 8-bit model, with its 256-levels-per-channel architecture, allows for approximately 16.7 million variants of color and density (256 x 256 x 256 = 16,777,216).

Other bit-depths exist that extend the number of available colors; the concepts are the same, but the numbers differ.

For example, 12-bit color – the depth most digital cameras use to record raw files – has 4,096 levels per color channel (instead of 256), for a total of 68,719,476,736 available colors – yes, that’s 68.7 billion, far more than present technology can reproduce in a print or display. The commonly used 16-bit depth has 65,536 levels per color channel, for roughly 281 trillion available colors. While some professional pigment printers and their RIPs can support a 16-bit file, getting the subtleties of that many colors onto paper as dots via a limiting 8 to 12 different ink colors is still problematic.
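The arithmetic generalizes to any bit depth: levels per channel is 2 raised to the bit depth, and the total color count is that number cubed. A quick check:

```python
for bits in (8, 12, 16):
    levels = 2 ** bits      # levels per color channel
    colors = levels ** 3    # every R x G x B combination
    print(f"{bits}-bit: {levels:,} levels/channel, {colors:,} colors")
# 8-bit: 256 levels/channel, 16,777,216 colors
# 12-bit: 4,096 levels/channel, 68,719,476,736 colors
# 16-bit: 65,536 levels/channel, 281,474,976,710,656 colors
```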

If you have questions, post them in the comments below.  If you want to see how this all ties together with Photoshop channels, stay tuned, that’s next!


Photoshop Channels De-mystified

Color channels are often thought to be the exclusive realm of mystics and Photoshop gurus. If you are willing to dedicate a few minutes of time to learning, I’ll take the mystery out of channels, and give you the power to improve your workflow and your end results.

The colors we see on our monitors and in print are created by combining specific amounts of either Red, Green, and Blue, or Cyan, Magenta, Yellow, and Black with the result being a new intermediate color.  Since the majority of our readers are using the RGB model, I’ll stick with that for our examples. If I get requests in the comments below, I’ll add a section explaining CMYK.

Most users of image-editing applications like Photoshop or GIMP, as well as users of other graphic design applications, are familiar with, or have heard mention of, the 256 levels used to define color and density. Most often these levels are represented by numeric values from 0 to 255, with zero being one of the levels. In the RGB model, these levels can be equated with visible light: zero being no light, or pure black, and 255 being maximum light – pure white.

The 256 levels represent visual density ranging from black through white.

When we build colors in the 8-bit RGB model, we are using 256 levels of Red, Green, and Blue in various combinations called a “build”. You can think of the color-build as a recipe for that specific color.

Intermediate colors result from builds using two or more colors.

Collectively the color channels are nothing more than a representation of those recipes. And when the recipes for all the pixels are put together in the right order, we have our color image. Viewing our color channels is effectively changing the way your cook-book is organized. So rather than finding the recipe for the pixels on one page of your cook-book, your color cook-book has three pages, one each for Red, Green, and Blue. The Red page tells you how much red to use and where, the same goes for the Green page and the Blue page.  So in our example above if we assume that each colored square represents 1 pixel, the Red page would tell us the first pixel would have 255 red, the second pixel would have 68 red and the third pixel has 126 red. The Green page would read: 1=128 and 2=68 and 3=0 and so on for the Blue page.
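In code terms, splitting an image into channels is just regrouping the same numbers. Here are the three example pixels from the text (the article leaves the Blue page unspecified, so the blue values below are made up for illustration):

```python
# Three pixels, each an (R, G, B) recipe; blue values are hypothetical
pixels = [(255, 128, 0), (68, 68, 68), (126, 0, 255)]

# "Re-organizing the cook-book": one page (list) per channel
red_page, green_page, blue_page = map(list, zip(*pixels))

print(red_page)    # [255, 68, 126]
print(green_page)  # [128, 68, 0]
print(blue_page)   # [0, 68, 255]
```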

Photoshop shows us these channels in a way that our minds can easily process: as images. We can grasp the concept of images much more easily than looking at the potentially millions to billions of numbers required for a single image. Photoshop’s default is to show you these images as various shades of gray (256 possible shades, to be exact). Here is what our example above looks like as color channels:

Red Channel

Green Channel

Blue Channel
Where the build calls for zero of a color, that channel represents the area as black. Where it calls for all of that color, it is represented in the channel view as white. All intermediate values show up as the appropriate shade of gray.


Real World Examples

This image is pretty much straight out of a raw conversion. The file has been optimized in the conversion to make sure that none of the channels contain either pure black or pure white. This is to mimic the way the eye naturally sees. We’ll compensate for its somewhat flat appearance when we show you how to optimize your files without damaging your color fidelity.

copyright John G Harris

Full color view. This is called the “composite” view.

Here is the view of the red channel, remember lighter areas indicate more red, darker indicates less:

copyright John G Harris

Red channel contents.

Here are the Green and Blue channels, you can click them for larger viewing:

Copyright John G Harris

Green channel contents.

Copyright John G Harris

Blue channel contents.







Notice that the lighter areas of the scene show as lighter in all three channels, and the darker areas of the image are darker in all three channels. You can also see that the areas of the image that are green show as brighter in the green channel in relation to the other two.

Also, all three channels have complete detail from shadows to highlights, nothing is lost. This is critical for full color fidelity. This full range of detail is essentially how channels should be. When channels look muddy or if there is “clipping” to full black or full white, there is a loss of color fidelity. I use channel views regularly to examine the state of a file’s “health”. If the file’s channels are not right, then I know right away I can’t generate the best possible print.

It is key to understand that in an RGB color space, a channel carries both color and density information. Any change you make to a channel will affect the color, saturation, and density of your file. If you increase any value in a color channel – say, moving the red value of an area from 180 to 185 – the resulting color will be both more red and lighter.
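You can see that density side effect with a standard luminance approximation. The Rec. 709 weights used here are one common choice, not the only one:

```python
def luminance(r, g, b):
    """Approximate perceived brightness of an 8-bit RGB pixel
    using Rec. 709 luma weights."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

before = luminance(180, 90, 60)
after = luminance(185, 90, 60)   # only the red channel was raised
print(f"{before:.1f} -> {after:.1f}")  # brightness rises along with redness
```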

See, no mystics required.

Reach out in the comments below with questions and comments.

Easy and Inexpensive Tips for Better Video Meetings


So there you are, trying to video conference with a client, vendor, investor, or mom, and your video feed, well… stinks. Nothing makes a bad impression like a bad impression. I recommend that you always test your video setup a couple of hours before you need to go live, making sure your webcam is working and the picture looks good – just in case you need to call your tech support team or fix it yourself. Here are some basic, straightforward things you can do to make sure a working system performs well.


Keep it clean!

Lens cleaner and a microfiber cloth are your friends. Get a cleaning kit from your local optician and keep your webcam clean. Spray solution on the microfiber, NOT on the camera, then gently remove junk and dust. The lens on your webcam is super tiny, so even a small speck of dust, lint, or hair can have a major impact on image quality. Fingerprints are worse, and can make your video look like it was shot through a plastic bag – yuck. Leave the soft-focus effect to Glamour Shots.

Can they hear you over all that noise?

Use a separate mic and turn off sources of background noise. The built-in mic on your laptop will likely pick up a great deal of background noise, including the sound of your voice echoing off your walls. An inexpensive lavalier (lapel-clip style) mic can be plugged directly into the mic input of your computer. USB podcast mics can be reasonably priced if you don’t need portability. Head-worn mics offer the best of both worlds: they sound much better than built-in computer mics, aren’t as noise-prone as a lavalier, and won’t pick up the sound of your clothing as you move about.

When possible, use ear-buds instead of computer speakers. The sound from your speakers will be picked up by your mic and can lead to echoes, feedback, or muddiness in your audio. Cheap ones can be purchased at dollar stores, but they’re not so good for your ear health. Be good to your hearing and invest in the best you can afford.

You can also get a head-set that has both head-phones and a boom mic. These are available from bulky down to slim and lightweight. Go light-weight if you’re not into that 80’s air-traffic-controller look.

A combo headset like this is portable, sounds great, and can eliminate background noise and echoes.

Heloooo? Is anybody there? It’s important to use sufficient lighting.

In low-light conditions, your camera has to amplify the signal it sees, and this results in noise that looks like graininess, ugly color, and a lack of sharp focus. This gets worse with lesser-quality webcams. The light coming from your monitor should not be considered sufficient. A minimum of two 60-watt-equivalent lamps within 6 feet of your face is a good starting point. A couple of cheap Harbor Freight or hardware-store clamp-on work-lights – one pointed directly at you and one bouncing light off the ceiling – can create a soft and pleasing look. Use compact fluorescent bulbs since they run cooler than halogen and incandescent and won’t heat up your office.

Avoid back-lighting, or you’ll look like a talking silhouette with glowing edges. This type of lighting can also wreak havoc with the auto-exposure system in your camera, resulting in a visual pulsing that will serve quite well to annoy your viewers.

Inexpensive and available from tool and hardware stores. The larger the reflector, the softer the light. Get better light by using two or more.

The bigger the reflector the softer the light. A 10.5″ dish is better than a 6″ dish. You can also paint the interior white to soften the light a bit more. This will help reduce pore detail and the visibility of wrinkles too! Not that any of us are actually concerned about such things…







Yeah okay but those work-lights look terrible in my carefully designed office. What then?
Ikea has some great-looking work lights in clamp, table, and floor options.

Certainly a step-up from the look of a shop light. Would also make a great background light.


Nicely styled clamp light. Moves easily and clamps almost anywhere. The larger reflector provides decent light. Point one at you and bounce the light of another off an opposite wall for great-looking light.


Part of the same Ranarp series, this could easily be combined with a couple of clamp-ons to create some fantastic light for your video sessions.


China Ball style lantern from Ikea for wrap-around soft light

In the professional video world there exists a type of light called the “China Ball.” Inspired by the round paper lanterns of China, these cast an omni-directional light that is super soft, very flattering, and somewhat mimics the look of a professional soft-box, except they throw the light everywhere – not in just one direction. The lighting is not inspiring from an artistic cinematographic point of view, but the lights look nice in the home or office. The lanterns are intended to be hung from the ceiling pendant-style and can be found at Ikea and import stores for around $5. That price is for just the lantern; you will also need a light kit that includes a socket, cord, and built-in switch.

Ikea has many stylish lighting options that mimic the china ball for under $20
This model allows for both bounce and direct lighting in one. It is a torchiere with a side light mounted on a gooseneck that can be pointed where you like.


Quality video streaming also requires a good network, fast internet and a computer that isn’t running at a crawl. For more tips on improving your video chats, check out the post: Improving Your Google Hangout Experience.

Do you have some tips you would like to share? You can show your stuff and help others by adding your ideas to the comment stream below.

Improving your Google Hangout Experience.

The rising star in online networking is the Google Plus Hangout On Air, or HOA for short. This medium mixes the experiences of video conferencing, webinars, screen sharing, and chat all in one easy-to-use package. The affordable (can you say free?) tool also comes with the added benefit of increasing your SEO, your personal brand, and the leverage of your YouTube channel.

If you are not using HOAs now, I urge you to look into them. Here are a couple of resources I highly recommend to get you on the right track towards understanding the benefits.

Entrepreneur and social media coach Sandra Watson provides valuable direction for those new to any social media platform.

Carol Dodsley has a G+ mastery course for those who want to dig deeper into the G+ community. She also hosts several weekly shows on G+ that cover a range of topics. You can find one of Carol’s posts espousing the virtues of G+ HOAs here, along with a post that makes a great business case for the use of HOAs.

Regardless of the platform, a good video conferencing experience requires some attention to detail to avoid bugs and other road-blocks. 

Having troubles with your video dropping out during an HOA?  Not getting clear video into your stream? Here’s a few things to do before you start your broadcast:

Attach to your network via Ethernet cable and turn off wireless at your computer.  Unless you are running the new experimental gigabit wireless, your Ethernet is likely to be much faster and less problematic.

Turn off all devices on your network that do not need to remain on during the broadcast. When devices are on, they routinely send various signals across the network, potentially creating congestion. This network traffic then gets “heard” by your computer, causing it to spend processing cycles evaluating the traffic to determine whether it is something it needs to pay attention to. Quieting things down on your network will help your computer focus its attention on your feed.

Speaking of quieting… Network and modem cables should never run parallel and close to a power cord. Power cords emit a small amount of radio frequency interference (RFI) that is picked up by your network cables, causing glitches that will affect data transfer rates (slow your network down). It’s nearly impossible to route these completely separately, as they often at least need to cross each other to get where they need to go – in that case, do your best to cross them perpendicular so they look like a plus (+) sign.

Same goes for USB and Microphone cables too. Keep them away from power cords when possible for all the same reasons.

Use the Chrome browser when possible. It’s developed by Google and will likely be the most stable for the Hangout plugin.

Speaking of plugins, they suck. Memory and resources, I mean. 🙂 They consume RAM and processor resources and are constantly pinging the network. Turn off any plugins, search bars, and extensions you don’t need for the broadcast.

Close any browser tabs you don’t need open. One tab can consume between 50 and 300MB of additional memory, depending on what is loaded into it. Also, open tabs could be sending traffic across your network. Shhhhh… a quiet network is a happy and speedy network.

Turn off ALL other applications – including browsers – you don’t need during the broadcast. Not only are they slowing down your computer, they are likely using your network. Email apps are always looking for new email. You don’t want to be downloading 25MB of attachments while you are trying to stream 3MB per second of HOA video.

If you are running Windows, you can temporarily turn off automatic updates to prevent activity during your HOA. Just remember to turn them back on later.

Run a valid copy of a good anti-virus and anti-malware application and keep it current and up to date. An infected machine = a slow machine.

If all of this is not enough to get things looking good then:

In dire conditions where you have done all of the above and are still having video drop-outs, uninstall any applications you don’t use on your computer. Many of these applications monitor your network to talk to the devices you just shut off. Printer utilities are big resource suckers and can often be uninstalled. Do you really need some bit of software to nag you when you are low on paper or ink? Some of your installed applications also check the internet every few minutes for updates that need to be installed – further slowing your network.

On Windows machines, turn off file indexing. This “feature” does make it faster to find files on your machine, but it also performs a great deal of disk reads and writes, perhaps during your broadcast.

Whew! That sounds like a lot to do, but it’s really not that much. Once you have cleaned your machine of malware, removed old applications you don’t need, and moved your cables, the tedious work is done. When you are ready to do an HOA, the easy thing is to reboot – this closes any applications you have running. When the computer comes back up and you log in, open just Chrome, launch one tab to G+, and you should be on your way to a great HOA experience!

Don’t discount the benefits of a good mic, and adequate lighting. For more on that, have a look at the post: Easy and Inexpensive Tips for Better Video Meetings

Megapixels Aren’t the Only Factor to Consider When Buying a Camera

Some questions from our clients and our readers come up more often than others. Many of them center on the importance of megapixels. This question came across my desk this morning, and since it’s relevant to many of our readers, I am sharing my response with you.

“When customers order larger photographs from the lab…let’s say using the Kodak Metallic Print paper…is anyone able to tell me how many pixels their cameras usually use or how does one figure this out?…I am thinking about upgrading my camera and would like the ability to offer larger prints without losing quality. I know that some of the new cameras offer more pixels, some offer 20, etc. However, is that what I truly want to look at?”

The good news is that you are asking good questions. The flip-side is that this opens a door to a warehouse full of more questions.

Yes, megapixels are an important factor in the end result – but only one factor. Megapixels are top of mind for everyone because camera manufacturers need “features” they can market. They are looking for ways to set their product apart from the others, and this stat is easily digestible to the consumer. We tend to like easy comparisons, and anything with a number fits that bill nicely. Unfortunately, the marketers rarely tell you the benefits of the various features and leave it up to you to infer them.

Here is a short list of what are often considered the primary stats to consider:

  • Chip Resolution (megapixels)
  • Raw file capabilities ( shooting in raw provides greater editing flexibility after the shoot)
  • Max ISO ( high ISO with low noise is generally considered favorable)
  • Dynamic Range ( The ability to capture shadow detail and highlights in the same shot)

There are other considerations that are often driven by an individual's needs.

  • Price
  • Video capable (frame rate and resolution are important factors for video quality)
  • Max burst rate ( more frames per second is important to action shooters who will shoot in bursts to try to get the perfect shot – think sports photography)
  • Auto bracketing ( Auto brackets help you get maximum dynamic range if the scene’s range is greater than the camera can capture in a single shot)
  • Auto HDR ( takes a bracket and automatically merges them for highlight and shadow detail)
  • zoom level if it’s fixed lens (The higher the X number the more zoom range from wide to telephoto)
  • auto focus speed ( important when you are shooting moving objects or if shot timing is critical)
  • max f/stop – again if it’s fixed lens ( f/2 lets in more light than f/3.5 and thus allows for faster shutter speeds)
  • included software ( some cameras come with specialized software – usually consumer grade software)
  • form factor ( How big, how heavy, what’s the shape and color, etc.  Will you carry it in a pocket or purse? Around your neck? etc)
  • tethered shooting ( remotely controlling your camera from a laptop, tablet or smartphone allows for instant downloads of the captured image to your device for enlarged viewing and fast editing)
  • position and accessibility of controls ( how fast can you get to often-used controls such as shutter speed, aperture, white balance, and any settings that are important to you? Also, are you likely to accidentally bump something during casual use and handling?)
  • bells and whistles ( fancy stuff like shooting in sepia or black and white, special effects, built-in timers for time-lapse, etc.)

Since your question was about image quality in relation to enlargement, I’ll focus my comments there.

There are several generally accepted aspects to image quality:

  • Pixel dimensions (megapixels = file resolution H x W)
  • Image resolution (actual sharpness – it’s a combined result of lens sharpness and pixel resolution)
  • Color fidelity ( color accuracy for every pixel – this affects how true-to-life the image is)
  • Noise level ( less noise is generally considered ideal – noise looks like film grain)
  • Compression artifacts ( these are generally considered detrimental as they destroy color fidelity and detail – to avoid these you will need a camera that shoots RAW or TIFF in addition to the usual JPEG)
  • Tonal range (contrast and detail without clipping to pure black or pure white – the ability to capture shadow detail and highlights at the same time)

All of these items are important to quality, but the items in bold are specific to how big the image will reasonably print before the quality drops to unacceptable. While pixel count is certainly important, equally if not more important is lens quality. Pixels are not a representation of sharpness, but of resolution. While the two are intertwined, sharpness is in my opinion the bigger factor. We have printed many files that had low pixel counts but were shot with really nice lenses; the results are better than high pixel count files shot with inferior lenses. A not-so-good 24MP file will not print as well as a good, sharp, high quality 18MP file scaled up to 24MP. If your budget demands picking either good glass or high megapixels, I suggest you go with the better glass – you’ll get a better return on your investment. Stats and specs can be misleading, so use them as a general guide, not as gospel, and remember: more is not always better, especially if you are giving up something more important to get the “more”.
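To put the megapixel numbers in perspective, here is a quick back-of-the-envelope calculation of how large a file can reasonably print. The 300 PPI target and 3:2 aspect ratio are rule-of-thumb assumptions for close-viewing prints, not lab specifications:

```python
import math

def max_print_size(megapixels, ppi=300, aspect=(3, 2)):
    """Return (width_in, height_in) printable at the given PPI.

    Assumes a 3:2 sensor and a 300 PPI close-viewing target,
    both common rules of thumb rather than hard limits.
    """
    total_px = megapixels * 1_000_000
    aw, ah = aspect
    # Solve w * h = total_px with w / h = aw / ah
    height_px = math.sqrt(total_px * ah / aw)
    width_px = height_px * aw / ah
    return round(width_px / ppi, 1), round(height_px / ppi, 1)

for mp in (18, 24):
    w, h = max_print_size(mp)
    print(f"{mp} MP -> about {w} x {h} inches at 300 PPI")
```

Run it and you'll see the jump from 18MP to 24MP buys only about three extra inches of print width at the same PPI – which is why better glass usually pays off more than more pixels.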

Noise level varies from model to model and is a result of the quality of the chip, the amount of light falling on the chip, and the quality of the camera’s internal computer and its software. It can also come from the software you use on your computer to process your RAW files. Part of the cost of a pro-level camera pays for the high-end, high-speed processors and CCD chips used in the body.

Most in-camera file compression is destructive, to varying degrees. In my opinion, JPEG is not the ideal file format if detail in the print is of paramount consideration. The compression process throws away critical detail and is very damaging to color fidelity. If you pay for a 24MP camera and shoot JPEG, you may only get 12-18MP worth of real detail and around 6MP of color fidelity. You can learn more about RAW versus JPEG in a previous post here.
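The rough "around 6MP of color fidelity" figure comes from chroma subsampling: typical in-camera JPEGs keep brightness (luma) at full resolution but store color (chroma) at one sample per 2x2 pixel block, a scheme called 4:2:0. A small sketch of that arithmetic – note that actual subsampling varies by camera model:

```python
def chroma_resolution_mp(sensor_mp, subsampling="4:2:0"):
    """Effective color resolution in megapixels for a subsampling scheme."""
    # Fraction of chroma samples retained per 2x2 pixel block:
    factors = {"4:4:4": 1.0,   # no subsampling, full color resolution
               "4:2:2": 0.5,   # chroma halved horizontally
               "4:2:0": 0.25}  # chroma halved in both directions
    return sensor_mp * factors[subsampling]

print(chroma_resolution_mp(24, "4:2:0"))  # 6.0
```

So a 24MP camera writing 4:2:0 JPEGs records only about 6MP worth of distinct color samples, even before the lossy compression itself does further damage.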
When choosing a new camera, make a thorough list that covers what kinds of shooting you do and what features and controls you use for each style. Use that list to determine your must-haves as well as any features that would just be nice to have. Here is an example of such a list:


  • Top Shutter speed
  • Aperture priority
  • flash sync
  • white balance
  • interchangeable lenses
  • high ISO
  • Tripod mount
  • Vibration reduction for hand held shooting
  • Jpeg and raw in single capture


  • Top Shutter speed
  • Aperture priority
  • flash sync
  • white balance
  • interchangeable lenses
  • Tripod mount
  • Bracketing
  • Tilt-able view screen for low angle shooting


  • Top Shutter speed
  • Aperture priority
  • flash sync
  • white balance
  • interchangeable lenses
  • high ISO
  • Tripod mount
  • Vibration reduction for hand held shooting
  • Tethered shooting
  • Jpeg and raw in single capture


  • Large megapixels
  • Full-frame sensor
  • Uses my existing lenses
  • Large view screen
  • Lightweight
  • Accepts accessory grip with additional battery
Now distill this down to one list to remove the duplicates, refine the details, then put them in order of priority for you:
  1. Interchangeable lenses
  2. Uses my existing lenses
  3. High megapixels
  4. Full-frame sensor
  5. Top shutter speed 1/5000 or better
  6. Aperture priority
  7. flash sync
  8. white balance
  9. Tripod mount
  10. high ISO
  11. Bracketing
  12. Large view screen
  13. Tilt-able view screen for low angle shooting
  14. Jpeg and raw in single capture
  15. Tethered shooting
  16. Lightweight
  17. Vibration reduction for hand held shooting
  18. Accepts accessory grip with additional battery
With list in hand, the internet or a good camera store should be your next destination for finding models that fit your needs. Nothing beats a well-informed and experienced camera salesperson. If you have your list, they can often point you to a few selections in a matter of minutes. It might cost a small amount extra to buy in the store, but the time and frustration you save by not doing the search yourself can be worth it.
In the comments below, share your priorities and your methods for picking the ideal camera or other tools in your arsenal. I’ll send the first five helpful responders a nice gift.

Raw Versus JPEG – What They’re Not Telling You

In the ever-present quest for perfection, photographers from around the country call me weekly with questions about shooting raw versus JPEG. The debate over this topic has been raging on the internet since the advent of digital still-image capture. Creating confusion, every photo blogger and “expert” in the forums has an opinion, each proclaiming “this is the right choice.” Well, today’s post is here to proclaim that it’s mostly bunk. There is no perfect answer that fits every photographer all of the time. The Holy Grail of file type is a myth, and it’s time to stop looking for it and get on with the business of taking great images.

The two camps in the JPEG versus RAW debate have strong emotional bonds to their “rightness” and are willing to go to great lengths – even as far as embarrassing themselves online while attempting to change the unchangeable minds of the opposing camp. They cling to the strategy of looking for evidence to support their case while ignoring the evidence of the other. In the end it just adds up to more confusion for the reader, who remains unprepared to make a decision.

If you are hoping this post will give you the right and perfect set-it-and-forget-it-forever options, you won’t find them, because I don’t think they exist – though you may find one that works for you most of the time. What you will find is unbiased data to help you make educated decisions before you enter a shooting scenario. You will also find enough data to see clearly why I made my bold statements against the “this is always the right way” mentalities.

Let’s get down to business
If you are a professional shooter, regardless of market you will likely have some of the following example criteria to consider as part of your decision making process:

    • On what standards do your customers determine quality of service?
      • How important is color accuracy?
      • How critical is the pixel depth (megapixels)?
      • Is dynamic range an issue?
      • What are your expected turn times from capture to delivery?
    • Technical issues
      • Are you shooting under controlled lighting and can control scene dynamic range?
      • What is the expected use of the image?  Web, press, photographic, pigment, all of them?
      • How large will the file be expected to print?
      • Do you have time for custom white balance?
      • Do you have time to verify exposure settings with a quality hand-held meter?
    • Business related
      • Do you see time as money?
      • Are you paying assistants or digital artists to post-process?
      • Are you paying your lab to color correct for you?
      • What is your present customer satisfaction rate and is there room for improvement?
      • Are you willing to spend some time, effort, and resources to impact product quality?
      • Do you expect your workflow to minimize the post-process impact on margins?

If you are a hobbyist, what are you looking to gain?

  • The best possible print.
  • To spend more time with family and less time with post-processing
  • To gain more control over the final image
  • To fit more images on the limited space of a card
  • Technical questions:
    • What is the subject matter?
    • Under what conditions am I shooting?
    • How will the image be used?
    • What are your personal criteria for quality?

Perspectives – it’s all a point of view
Before choosing your shooting format, I recommend you first determine your priorities and make a list. When you know what is important to you, the best choices can be made, and most often with higher levels of confidence. For these examples, we’ll look at the typical requirements of each shooter type. Knowing the requirements will lead to understanding why a certain thing might be a priority. Photographers and business models vary, so results and opinions may differ.

The pro has to satisfy an end user in order to make a living, often working with pro-level tools to maximize image quality and speed the process. For some professional markets, such as studios, time is an expense against the profit margin and customer experience may have the largest impact. For other business models, such as fine art, maximum image quality is the primary target.

Studios are the business model most likely to operate in some type of assembly-line workflow. They have dozens of images from each person or product shot, and each of these files needs some kind of attention – usually starting with elimination of the unusable, then selection of the prime images, followed by editing. The artists paid to handle this process are usually compensated by the hour, so the longer it takes to move a job through the workflow, the deeper the cut into the bottom line. Quality needs to be maintained to meet or exceed the customer’s minimum expectations. The average consumer’s expectation is often simply that the professional print should exceed the quality of a drug-store print; as long as they can see a higher level print, that particular expectation is met (photographic skills such as composition aside for the intent of this discussion). Skin tones and most other colors related to people photography fit well inside the sRGB colorspace. Studios have a great deal of control over their lighting, and thus the required dynamic range for the shoot.
A good setup can usually hold within the 6-stop limitation of a JPEG workflow. Interior location photography has additional challenges resulting from ambient conditions that might not be controllable. Office lighting, large windows, etc. can contribute to the overall lighting of a scene and may result in lighting ratios that exceed the 6-stop limit. In profit-centric people photography, merging brackets for HDR is rarely an ideal solution.
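Whether a scene fits that 6-stop limit is easy to estimate: each photographic stop doubles the light, so the stop range of a scene is the base-2 logarithm of its lighting ratio. A quick sketch – the example ratios below are illustrative, not measured values:

```python
import math

def scene_stops(highlight_reading, shadow_reading):
    """Dynamic range in stops between two light-meter readings.

    Each stop is a doubling of light, so the range is
    log2(brightest / darkest). Readings must be in the same units.
    """
    return math.log2(highlight_reading / shadow_reading)

# A controlled studio setup at a 32:1 lighting ratio -> 5 stops:
# comfortably inside a 6-stop JPEG workflow.
print(scene_stops(32, 1))  # 5.0

# An office with a bright window at 200:1 -> roughly 7.6 stops:
# beyond what JPEG can hold without clipping.
print(round(scene_stops(200, 1), 1))  # 7.6
```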

Commercial product photography has unique demands, especially when the product or person being photographed requires special staging and effects. And yet the images themselves usually end up being used in the lowest of gamut conditions: 4-color press and the internet. In a complex shoot, where lighting and effects such as smoke or movement are in play, bracketing is not an option, so maximum dynamic range is beneficial. In table-top product photography – think catalog photos – there is no movement, lighting is completely controllable, and product colors rarely exceed the basic gamuts of Adobe1998 or sRGB. Since the subject does not move, bracketing can be used to maximize dynamic range. Food photography brings the potential for highly saturated colors that would do well with a larger gamut and maximum control. A commercial photo session often includes a day or more of styling, prep, and active capture, followed by a similar amount of time in post. There are thousands if not tens of thousands of dollars at stake, and final image quality can be critical to the customer’s end sales. Such diversity creates situations where JPEG would be most profitable and others where a RAW workflow is mandated.

The fine-art photographer is often most concerned with image quality. They seek an integrity in the image that JPEG does not deliver. Maximum dynamic range, sharpness, color fidelity, and detail are all sought in the pursuit of the ideal print that meets the artist’s vision and the expectations of the discriminating print buyer. Fine art images are often heavily manipulated to create the mood sought by the artist and to bring out maximum detail; through manipulation, any compression artifacts will also be brought to greater light. Artists will often use improper white balance to enhance mood and emotional response. The artist will often spend countless hours laboring over pre-planning of a shoot, and many financial resources are spent on models and assistants. The final editing is usually performed by the photographer rather than an assistant.

Pick a card, any card…
Prepared with the insights you now have into the requirements of a few professional photographer types, these charts should help clarify why one format won’t properly cover every photographer’s needs, and how some photographers might benefit from both formats during their day.

JPEG Versus Raw, Capabilities by File Format Type

Basic Pros and Cons

Raw – Pros
  • Can be any working space you have a profile for.
  • WB can be tuned post-capture.
  • Greater exposure latitude – though precise exposure is recommended.
  • Highest level of adjustment flexibility before causing gaps in the histogram.
  • Best option if over-sampling is required.
  • Supported by pro-level software.
  • Non-lossy raw formats contain the highest levels of color fidelity.

Raw – Cons
  • Takes more time and resources to post-process.
  • Larger in-camera and offline storage space requirements.
  • Must be processed before online sharing/distribution can occur.
  • Must be processed prior to printing.
  • Additional software required.
  • Not supported by all editing software.

JPEG – Pros
  • Smaller file sizes maximize in-camera and offline storage space.
  • With proper WB and exposure, files can often go directly to print.
  • Easily shared via email and web with no additional work.
  • Lower time investments.
  • No additional software required.
  • Maximum software support at both pro and consumer level.

JPEG – Cons
  • Usually limited to sRGB or Adobe1998 at time of capture.
  • Any settings applied in camera (WB, sharpening, noise reduction, etc.) are “fixed” into the image – changes require post-capture retouching/editing.
  • Minimal exposure latitude of 1/8 stop.
  • Lossy format means you paid for resolution that you are sacrificing.
  • Typically does not over-sample well due to in-camera sharpening and compression-related artifacts.

JPEG Versus Raw, Considerations by Photo Type

A successful photographer will learn the needs and expectations of their client, then support those needs through technical and artistic know-how, all the while minding the needs of the bottom line.

You can help our readers by sharing tidbits you have discovered regarding JPEG and Raw workflows in the comments below. And as always, we are here to answer your questions.

Big Changes in JPEG Could Change Your Workflow

The folks over at the Independent JPEG Group, who maintain all things technical behind the JPEG file format, have added some much-needed support to the oft-maligned, aging file type. With the release of version 9 of the JPEG software libraries comes 12-bit color support and optional lossless compression; after all, it wouldn’t do much good to have 12-bit color if you lose so much color fidelity in the compression process. For the true geek in all of us, the new libraries are available for download should you want to try your hand at implementing them into your workflow. Be warned, though, that unless you have some serious skills, implementing them in existing software will be a challenge. Open source fans on Linux and MacOS have the best shot at implementation at this time. You can grab the codec files from
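To see what the jump from the old 8-bit limit to 12 bits per channel means in raw numbers: levels per channel double with every added bit, so the gain is larger than it sounds. A quick sketch of the arithmetic:

```python
def levels(bits_per_channel):
    """Number of distinct tonal levels a channel can record."""
    return 2 ** bits_per_channel

for bits in (8, 12):
    per_channel = levels(bits)
    total = per_channel ** 3  # three channels: R, G, B
    print(f"{bits}-bit: {per_channel} levels per channel, "
          f"{total:,} total colors")
```

Going from 256 to 4,096 levels per channel is what makes 12-bit JPEG interesting for smooth gradients and heavy post-capture edits, where 8-bit files tend to show banding.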

Camera Support

12-bit support should be a welcome addition for raw file shooters, as the new standard will allow for full color fidelity in embedded JPEG previews and in-camera JPEG stand-alone files. This new wider-gamut support could result in JPEG being adopted as a viable workflow option for the serious pro. Adoption of the new standard will likely take some time in the commercial arena as the big software players and the camera manufacturers wait to see if the social-media buzz will add credence to the spend required for implementation.

It’s my prediction that the early adopters will be open-source software and firmware authors – in part because their mind-set is usually quality instead of cost, and in part because their hands are not tied by corporate bankers, investors, and bean-counters. It’s not uncommon for open-source software users to get some of the goods long before commercial adoption – as an example, Adobe’s content-aware fill had been years-old news to GIMP users by the time Photoshop users heard about it. Frequency separation processes, de-blurring/de-motion tools, and boutique sharpening algorithms could make this list as well, as a good part of those were developed in scientific and educational circles and released as open-source, often years prior to commercial adoption.

The new JPEG features could mean a short-term increase in digital camera sales, as new models using the technology will appeal to a portion of the market. Users of major-manufacturer legacy equipment might be out of luck unless you are willing to use a hacked version of the firmware, and assuming your JPEG codec is not hardware-embedded in a way that cannot be bypassed. Canon camera users will find the most mature firmware hacks over at Magic Lantern and CHDK. Nikon buffs can find a handful of hacks online, but none of them appear as mature and feature-rich. A place to start looking if you are a Nikon fan might be the fledgling community over at Nikon Hacker.
If factory firmware is your only cup of tea, then Canon owners might still have hope, as Canon has shown greater interest in supporting legacy equipment with feature upgrades. Nikon users will very likely be out of luck, as that manufacturer tends to release only bug-fixes for their firmware. As a Nikon shooter myself, I envy the attention Canon gives to the best interest of their users. In the end, users of cameras that support raw but do not support firmware updates should still be able to rely on software-based raw file conversion to get full JPEG9a support when it’s available.

Now the bad news

It appears that images created using the new codec will not decode properly in software using the older versions. This suggests that full implementation of the new JPEG codec into your workflow will likely require updates to software one would not normally associate with it: graphic design/layout apps, browsers, thumbnail preview generators, operating system patches – even updates to your smartphone will be in the mix. My advice: don’t rush to adoption unless you have a very solid and critical reason to do so. Attempting to share a new-version file with a client who has not fully adopted the software to handle it might lead to ugliness.

I could find nothing in the new release that cannot be achieved using a different file format for most needs. For many years the TIFF format has fully supported larger bit-depths and lossless LZW compression. PNG’s lossless compression is already web-compatible and enjoys full browser support. Is the new JPEG version a potential game changer? Perhaps in a few circles. Those circles, however, will likely touch most any user of digital imaging; just not right away. When adoption is complete, the changes will be more of convenience than of consequence, but it’s nice to see the old JPEG standard get another face-lift to keep it current.