How To

Should I get Canon or Nikon?

I’ve gotten asked this question a few times lately, and it’s probably a good idea to share my thoughts publicly. Here’s an email conversation I had earlier today:

B.T.: “Simply put, is the Canon 30D or the Nikon D80 the best way to go? […] Was about to get the Nikon D40, but then got a piece of advice that said that Canon might be better in the way of sports photography. I’m not sure if this was a “standard” or a perceived notion. Anyhow, now I’m trying to decide between the D80 and 30D. I know once I buy into either the Nikon or Canon “family” I’m pretty much there because of accessories and lenses.

So… what was it that made you choose Canon? I knew you were considering the D200 for a bit. […] But what are your thoughts on overall image quality between the two given the different types of image sensors (CCD vs. CMOS)? And I’ve actually thought of going ahead w/ the D40 as a stepping stone to the D200. To be honest, I’ve been back and forth a few times… but wondered about your opinion. […]”

My reply, with some additional edits:

I’m always hesitant to give brand-specific advice, because what works for me might not work for you. I have not used Nikon DSLRs yet. People that use them love them. By the same token, people that use Canon DSLRs love them as well. And people that use Olympus DSLRs love them too. And Sigma, and Fuji, etc.

What I can tell you is to try out the camera. Inquire locally, perhaps at your local camera shop, and see where you can rent the camera you’re interested in buying, even if it’s only for a day or two. Then rent the camera from the other brand, and compare. Even if it costs you up to $200 for the total cost of renting them, it’s well worth it considering you’ll be spending thousands on the equipment and will own it for several years or more, particularly the lenses.

When it comes to the 30D and D80, I tried out the 30D for a whole month. Then I went to the store and examined the D80 closely. I liked the grip and feel of the 30D better than that of the D80, but that’s just me, and my hands are different from others’.

What I can also tell you is that it seems the Nikon cameras have a little more noise and they lose some of the detail in low light when compared to Canon. But if you plan to use a tripod for longer exposures or a flash — and both of these devices will allow you to use a lower ISO — the difference in photo quality is going to be difficult to see, so don’t hang your entire purchase decision on this issue alone, unless shooting mostly hand held in low light is going to be one of the main reasons you want the camera.

Once you get above a certain level (you graduate from a point-and-shoot to a DSLR), the brand or the camera itself doesn’t matter that much. It won’t be the camera that takes the great photos, it’ll be you. To a certain extent, the lenses that you use will matter more than the camera body. You can get great photos with any brand of camera, provided you know its strengths and weaknesses and know just how to use it.

One last thought: the CCD vs. CMOS sensor arguments are pretty useless all around. Don’t forget, Nikon itself — while praised for its CCD sensors — uses a CMOS sensor for its flagship model, the D2X. It doesn’t matter what sensor is inside the camera, as long as the camera manufacturer uses it well. It seems Canon makes pretty darn good use of its CMOS sensors, while Nikon makes great use of their CCD and CMOS sensors as well. And after trying out an Olympus DSLR, I was pretty happy with their CCD sensor as well (except in low light). The Fuji Pro line has some pretty interesting sensors as well. And Sigma is doing groundbreaking work with the Foveon sensors in their SD line. The SD14 is a pretty amazing camera, and I would have bought it instead of my 5D if its effective resolution wasn’t 5 megapixels. (Note: the SD14’s advertised resolution is 14 megapixels, because it has three stacked sensors at 4.7 megapixels each, but the effective resolution is still about 5 megapixels.)

The point is to find out what works for you, and know how to use it well. You can only do that when you’ve held the equipment in your hand and researched the field thoroughly. It really helps when you sit down in front of a spreadsheet and add up all of the stuff you want to buy: camera body, lenses, filters, tripods, batteries, bags, sensor and lens cleaning equipment, editing software, etc. You’ll quickly find out what your ceiling price is, and you’ll know what camera body and brand you can afford. And if you compare your choices that way, you’ll have the information you need to make an educated, logical choice. The decision will be all yours, and believe me, you’ll enjoy your equipment a lot more that way.


Discerning among LCD monitors

I’ve been looking at various LCD monitors lately, because I’d like to get one for my laptop. Truth be told, I’m more confused than when I started. There’s a dizzying array of prices among various brands, in the same size display, and not a whole lot of explanation as to why that is. Sure, every company touts their higher contrast ratio, higher brightness, more resolution, more inputs, etc., but that still doesn’t explain why the prices differ so much.

I’m looking at 20-22″ LCD monitors, and in that range, I’ve managed to find monitors in three price groups:

  • Around $250, I can buy this Sceptre or X2gen (brands I haven’t heard of). I can also find similar prices from brands like ViewSonic, Samsung, Dell and HP.
  • From $600-900, I can get the 20″ or 23″ Apple Cinema Displays. The thing is, other than the distinctive design, the specs are actually less impressive than those of the much less expensive monitors in the first group.
  • Then, of course, there are brands like LaCie, with their professional LCD displays that start [*cough*] around $1,800 for the sizes I’m interested in.

So I did a lot of searching, and found out that manufacturers can fake the contrast and brightness measurements, so even though everyone touts their higher specs, you can’t trust them. Many of the monitors also don’t list a measurement that’s harder to fake, the gray-to-gray response time. I wanted to compare apples to Apples, if you will.

After a little more spec comparison, I found that the top of the line LaCie monitors list a spec that no one else seems to list, and that is the “gamma correction”. For example, their 321 LCD has 12-bit gamma correction. Less expensive models have 10-bit gamma correction. And that got me thinking: if, at least for LaCie, the price is proportional to the gamma correction bit depth, a higher spec there might be a good thing. But the less expensive monitors didn’t list it, and Apple didn’t list it either. What was I to do?

I gave Apple a call. After about 15 minutes of alternately talking and holding the line while a sales rep consulted with the engineers, I got nothing but smoke and mirrors. Not that I think it was intentional; I just think the rep didn’t have the info. He didn’t know what gamma correction was, and the bit depth of the gamma correction on Apple’s displays isn’t listed anywhere in the specs. The person he spoke with in engineering either didn’t know it or didn’t feel like sharing that bit of data. So the rep kept coming back to me with 16.7 million colors, which works out to 24-bit color.

I kept thinking, that can’t be right! Here LaCie is charging over $1,800 for 12-bit gamma correction and Apple claims 24-bit on that spec at less than half that price? They would be an absolute bargain if that were true! But it’s not, at least not for that spec. I don’t doubt the Apple displays can show 24-bit color overall. But I still don’t know whether their gamma correction engine outputs 8-bit (the normal spec), 10-bit (the higher end), or 12-bit (the really high end), and this determines how well that 24-bit color gets displayed. This is important because the higher the bit depth, the smoother the color is. I’m a photographer, and I shoot in RAW. The files I get are either 12-bit or 16-bit color, and I can see some dithering in color tones when I look at the photos on my laptop’s screen. That means that even though my video card can display 32-bit color, my laptop’s effective display is less than 16-bit.
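For the curious, here’s a toy model of why the internal bit depth matters. A real monitor’s processing pipeline is more involved than a single gamma lookup, so take this as an illustration only: push all 256 levels of an 8-bit gray ramp through a gamma adjustment and back, quantizing the intermediate value at the monitor’s internal precision, and count how many distinct levels survive. Lost levels are what you see as banding or dithering.

```python
def surviving_levels(internal_bits: int, gamma: float = 2.2) -> int:
    """Round-trip an 8-bit gray ramp through a gamma LUT of the given
    internal bit depth and count the distinct output levels."""
    scale = 2 ** internal_bits - 1
    seen = set()
    for i in range(256):
        x = i / 255
        y = round(x ** (1 / gamma) * scale) / scale  # internal gamma LUT step
        z = round(y ** gamma * 255)                  # back out to the 8-bit panel
        seen.add(z)
    return len(seen)
```

In this toy model, a 12-bit internal pipeline preserves all 256 input levels through the round trip, while an 8-bit one merges a good number of them into their neighbors, and that’s exactly the kind of smoothness difference the gamma correction spec is hinting at.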

I have a feeling that given their price range, the Apple Cinema Displays are either 8-bit or 10-bit when it comes to gamma correction. If they’re 8-bit, then they’re overpriced given their specs, and they’re charging hundreds more based purely on design. If they’re 10-bit, that’s interesting, and it warrants a closer look.

So, as you can see, I’ve gotten nowhere. I’d love to have a reason to buy an Apple Cinema Display, but it’s got to be a good reason, based on facts, not sales fluff. I like Apple but I’m not a fanboy. At this point in time, I can’t see why I should spend more than $1,000 on an external monitor, so that rules out the LaCie LCDs and the other high end displays. That means if Apple can’t offer me a compelling reason for their higher price, I’ll go with one of the less expensive monitors and see how things work out. If and when I do, I’ll blog about it, so stay tuned. And by all means, if you’ve got some ideas about this, do let me know.


A bit about Wide Color Range and Lightroom

Those of you who follow my blog know I love color. I always look for ways to increase the intensity and range of the colors in my photos. I like to call it WCR (Wide Color Range). Who knows what it’s really called… Since I’m self-taught, that’s what I call it. I wrote recently about one of the ways I post-process my photos, and have gotten a lot of great feedback on that method. But it’s not suited to every situation. While it works very well for architecture, some nature, and even some portrait photography, the colors get to be too harsh in other situations.

So I started to experiment, and found that Lightroom is quite capable when it comes to achieving most of my post-processing goals. I really like the ability to make tonal and individual color adjustments without opening Photoshop. For example, I find Lightroom’s heal tool much easier to use than the heal tool in Photoshop. There’s a very practical reason for preferring to work in Lightroom as well, and it’s this: every time I transfer a RAW image to Photoshop, it turns into a 45MB file. Add an extra layer, and it doubles in size. That means every finished PSD or TIF file gets to be anywhere from 90-135MB or more. Compare that with 7-8MB for the original DNG file, and you can see how quickly hard drive space becomes an issue, particularly when a typical photo session of mine yields about 300-400 photos or more.
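Incidentally, those file sizes are easy to sanity-check: an uncompressed image is just width × height × channels × bytes per channel, per layer. Here’s a quick sketch; the 3504×2336 frame in the comment is a hypothetical ~8-megapixel example, not a spec from any particular camera.

```python
def uncompressed_mb(width: int, height: int, channels: int = 3,
                    bits: int = 16, layers: int = 1) -> float:
    """Rough uncompressed image size in megabytes."""
    return width * height * channels * (bits // 8) * layers / (1024 ** 2)

# A hypothetical ~8-megapixel frame (3504 x 2336) at 16 bits per channel
# comes out to roughly 47 MB per layer, so adding a second layer puts a
# PSD into the 90+ MB range.
```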

The key to using Lightroom (at least for me) is to be bold, to not be afraid of potentially ruining a photo. There’s always the reset button in case my results are off the mark. That means I can experiment all I want, non-destructively, which is hugely beneficial.

Here are a few of my recent results with Lightroom. In this photo, the sky was a fairly colorless light blue, though there were some tonal differences that allowed me to change hues and their intensity and really bring out the greens.

Green power

Here the sky was a light blue, but I wanted a different look, since I have tons of tree photos in my library.

Sensory perception

This was fairly simple, just slight vignetting with blue and green color enhancements, but I really like the result.

Windswept but steady

This one was a bit more complicated, with lots of tonal, hue, saturation and lightness adjustments. I really like how all of the trees are straight, spaced close together, and yet still allow a nice view of the horizon. That’s why I photographed them.

Get up, stand up

There was no blood on the tracks in this photo, nor was there any red paint. There were some dark orange rust spots though. I changed their hue from orange to dark red in Lightroom, then increased that particular color’s saturation. Finally, I decreased that color’s lightness in order to darken it. In real life, those railroad tracks look perfectly normal, though rusty from a winter’s disuse.

Blood on the tracks


My own sort of HDR

I’ve been intrigued by HDR (High Dynamic Range) post-processing for some time. At its best, it renders incredible images. At anything short of its best, it renders completely unrealistic, overprocessed, unwatchable crud. Even some of the best images made with HDR methods seem weird; they’re not right, somehow too strange for my eyes. But I did want to try some of it out myself and see what I’d get. The challenge for me was to keep the photo realistic and watchable. I wanted to enhance the dynamic range and color of my photos in an HDR sort of way. I also didn’t want to sit there with a tripod taking 3-5 exposures of the same scene. As much fun as that sounds, I don’t always carry a tripod with me.

By way of a disclaimer, I have not researched the production of HDR-processed images thoroughly. I have, however, seen a boatload of HDR images on both Flickr and Zooomr. I did read the tutorial that Trey Ratcliff posted on his Stuck in Customs blog. Of course, we all know Trey from Flickr, where he posts some fantastic HDR images on a daily basis. So, given my disclaimer, realize I’m not claiming to be the first to have done this. I’m just saying this is how I worked things out for myself. If indeed I’m the first to do this, cool! If not, kudos to whoever did it before. I’d also like to encourage you to experiment on your own and see how things work out for you. Change my method, build on it, and make something even better. While I’m on the subject, I’m not even sure I should call this processing method HDR. It’s more like WCR (wide color range). What I’m really doing is enriching the color range already present in the photo while introducing new color tones.

When I started, I experimented with Photoshop’s built-in Merge to HDR feature. Using Photoshop, after a few false starts that I deleted out of shame, I got something halfway usable. Have a look below.

Brook and rocks

Here’s how I processed the photo above. I shot three exposures of that scene in burst & bracket mode, handheld (no tripod), in RAW format. Then:

  • I darkened the low exposure, lightened the light exposure, and exported all three to full-res JPGs.
  • I used Merge to HDR in Photoshop and got a 32-bit image.
  • I adjusted the exposure and gamma, converted to 16-bit, then adjusted exposure, gamma, colors, levels and highlights once more.
  • Finally, I applied Smart Sharpen and saved as an 8-bit JPG.

It came out okay, not too weird anyway, but still not to my satisfaction. I should mention I also used a sub-feature of the Merge to HDR option that automatically aligned the images. As I mentioned, I shot handheld, and there were slight differences in position between the three exposures. Photoshop did a pretty good job with the alignment, as you can see above. It wasn’t perfect, but definitely acceptable.

I know there are people out there saying Photoshop doesn’t do as good a job with HDR as Photomatix. It’s possible, although I got decent results. Maybe at some point in the future I’ll give Photomatix a try, but for now, I’m pretty happy with my own method — see below for the details.

But first, what’s the point of HDR anyway? When I answered that question for myself, I started thinking about creating my own (WCR) method. The point as I see it is this: to enhance the dynamic range of my images. That means bringing out the colors, highlights and shadows, making all of the details stand out. Whereas a regular, unprocessed photo looks pretty ho-hum, an HDR-processed photo should look amazing. It should pop out, it should stand out in a row of regular images. It should not look like some teenager got his hands on a camera and Photoshop and came up with something worthy of the computer’s trash bin. As I’ve heard it from others, the standard way to postprocess a scene in HDR is to take 3-5 varying exposures, from low to high. Those exposures can then be combined to create a single image that more faithfully represents the atmosphere and look of that scene.

But, what if you don’t have a tripod with you? Can’t you use a single image? Yes, you can shoot in RAW, which is the equivalent of a digital negative, and good HDR software can use that single exposure to create multiple varying exposures, combine them and create an image that’s almost as good as the one made from multiple original exposures.

What if you want to make your own HDR/WCR images, in Photoshop, all by yourself? I wanted to do that, and I think I arrived at a result that works for me. Here’s what I did. I took a single exposure of a brook in the forest, which you can see below, unprocessed.

Brook, unprocessed

There’s nothing special about this photo. It’s as the camera gave it to me, in RAW format. The colors are dull and boring. There’s some dynamic range, but the color range is limited; it’s pretty much all tones of brown. I took this single exposure and converted it to a full-res JPG (though you don’t have to; you can use the RAW directly). I opened it in Photoshop, created three copies of the original layer, called them Low, Medium and High, then lowered the exposure on Low, left Medium as it was, and raised the exposure on High. Then I set all three to Overlay mode. (The original JPG, preserved in the Background layer, was left in Normal mode and remained visible underneath all these layers.) The key word when talking about exposure here is subtle. Make subtle changes, or you’ll ruin the shot.

As soon as I adjusted the layers and changed them to Overlay, things looked a lot better. The dynamic range was there; it just needed to be tweaked. So I went in and adjusted the individual exposures for each layer some more to make sure parts of the photo weren’t getting washed out or ending up too dark. Then I threw a couple of adjustment layers on top for levels and colors. Finally, I duplicated the three layers and merged the duplicates, then used the Smart Sharpen filter. The adjustment layers were now on top of it all, followed by the merged and sharpened layer, and the three exposure-adjusted layers, which were no longer needed, but I kept them in there because I like to do non-destructive editing. Here’s the end result, exported to a JPG.
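If you want to see the arithmetic behind the Overlay trick, here’s a minimal per-pixel sketch in Python. Photoshop’s actual compositing pipeline is more involved, and the exposure shifts in this sketch are illustrative rather than the exact values I used: each exposure-shifted copy of a value gets blended over the running result with the standard Overlay formula.

```python
def overlay(base: float, blend: float) -> float:
    """Photoshop-style Overlay blend for channel values in [0, 1]."""
    if base < 0.5:
        return 2 * base * blend
    return 1 - 2 * (1 - base) * (1 - blend)

def wcr_pixel(value: float, stops=(-1.0, 0.0, 1.0)) -> float:
    """Blend exposure-shifted copies of a pixel over itself in Overlay mode.

    Each shift is in stops: the copy is value * 2**stop, clipped to [0, 1].
    The shift values here are illustrative, not a recipe.
    """
    out = value
    for stop in stops:
        copy = min(max(value * 2 ** stop, 0.0), 1.0)
        out = overlay(out, copy)
    return out
```

Run a few values through it and you’ll see the character I described: midtones hold roughly steady while the darks get driven down hard, which is exactly the thickened atmosphere these photos pick up.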

Brook, processed

This is the sort of post-processing that pleases my eye. The details were preserved, the colors came out looking natural yet rich, and things look good overall. Even though some spots are a little overexposed, I like it and I’m happy with it. Let’s do a quick review. Using my own WCR/HDR-like method, I accomplished the following:

  • Used a single RAW/JPG exposure
  • Didn’t need to use a tripod, could shoot handheld
  • Didn’t need special software, other than Photoshop
  • Achieved the dynamic range I wanted
  • The photo looks natural, at least to my eyes
  • The post-processing was fairly simple and took about 30 minutes

There is one big difference between my WCR method and the usual HDR post-processing. Done right, the latter will help bring detail out of the shadows. Because of that single or multiple exposure done at +2 EV or more, spots that would normally be in the dark in a regular photo can be seen in HDR. Not so with my method. Here the darks become darker. The atmosphere thickens. The highlights become darker as well. The whole shot gains character, as I like to call it. So this is something to keep in mind.

Just to clarify things, the image above was the first result I obtained using my method. There was no redo. I then processed some more images, and got a little better at it. It’s worth experimenting with the Shadow/Highlight options for each individual layer. It helps minimize blown-out spots. It’s also very worthwhile to play with the Filter tool for each layer. This really helps bring out some nice colors. It’s sort of like taking three exposures of the same scene with different color filters. The results can be stunning if done well. You also don’t need to use three overlays. It all depends on the photo. Some photos only need one overlay, while others need four or five. Subtle changes in exposure can help bring out areas that are too dark. You can see some photos below where I used my own advice.

Brook, take two, processed

Meeting of the minds

Parallel lines

There you are

I hope this proves useful to those of you out there interested in this sort of post-processing. It’s my dream to see more natural and colorful photos, regardless of whatever post-processing method is used.


Hacking the GN calculations when using manual flash

Here’s how to hack it when you’re stumped as to what guide number to use with a manual flash. This is useful when you’re using an analog SLR that won’t sync the flash power automatically, or you’ve got a DSLR and want to fine-tune the amount of light the flash puts out. I can’t stand having to calculate this with formulas. We may all have seen this one:

Aperture (f-stop) = GN / distance (in meters)

where the GN is rated at a given ISO (usually 100). At other ISOs, the effective GN scales with the square root of the ISO ratio, so going from ISO 100 to ISO 400 doubles it.

But do any of us know it by heart, or better yet, want to know it? And are we really going to take out a tape and measure the distance to the subject? I know I don’t feel like it. So how can we hack this? Well, we use what knowledge we have to ascertain the flash power we want, and then we adjust the GN (Guide Number) up or down. It works like this:

  • Higher GN means more power for the flash and consequently, more light
  • Higher f-stop means smaller aperture, and that translates to less light coming into the camera
  • Higher ISO means better sensitivity to the light that the aperture lets in
  • Higher distance means less light (remember, we’re using a flash, and the effective distance is limited)

So, what does this mean for us? Simple: we can adjust any of the four factors listed above to get the photo we want. Need more light? Boost the GN and/or the aperture. Can’t get more light, but want a better photo? Boost the ISO, but recognize the photo may be grainy. Can’t boost ISO? Decrease the distance between you and the subject.
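The formula and the four factors above can be wrapped into a small helper. This is just a sketch in Python; it assumes the GN is given in meters and rated at ISO 100 (the usual convention), with the effective GN scaling as the square root of the ISO ratio.

```python
import math

def required_f_stop(gn: float, distance_m: float, iso: int = 100) -> float:
    """f-stop needed for a manual flash at full power.

    gn: guide number in meters, rated at ISO 100 (assumed convention).
    The effective GN grows with the square root of the ISO ratio.
    """
    effective_gn = gn * math.sqrt(iso / 100)
    return effective_gn / distance_m

# A GN 36 flash at 3 m and ISO 100 calls for f/12;
# raising the ISO to 400 doubles that to f/24.
```

If the number that comes out isn’t an aperture your lens actually has, move yourself or drop the flash power until it is.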

Of course, keep in mind that when you boost aperture (choose a lower f-stop), you’ll decrease the depth of field. Think of the focus field as a loaf of bread. When you use a small aperture (large f-stop, 16 for example), you get the whole loaf in the shot. When you use a large aperture (small f-stop, 1.4 for example), you get only a slice in focus. You can effectively think of f-stops as slices of that loaf of bread. Larger f-stops mean more slices. So if you’ve got objects in this photo of yours that reside at various points of focus (more slices), to keep them all in focus, you’ll need to keep the aperture fairly small (large f-stop). If you’re only interested in a particular object, by all means, increase the aperture (small f-stop), get more light that way, and use a lower GN. You’ll get more natural colors. Flash light can be harsh and wash out the nuances if overused, so the less you can use, the better off you are.
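The loaf-of-bread picture lines up with the standard thin-lens depth-of-field formulas, which you can sketch like this. The 0.03 mm circle of confusion is the conventional full-frame figure, and the 50 mm lens in the comment is just an example; treat the numbers as ballpark.

```python
def depth_of_field_m(focal_mm: float, f_number: float, distance_m: float,
                     coc_mm: float = 0.03) -> float:
    """Total depth of field in meters, from the standard hyperfocal formulas."""
    f = focal_mm / 1000
    c = coc_mm / 1000
    hyperfocal = f * f / (f_number * c) + f
    if distance_m >= hyperfocal:
        return float("inf")  # everything out to infinity is acceptably sharp
    near = hyperfocal * distance_m / (hyperfocal + (distance_m - f))
    far = hyperfocal * distance_m / (hyperfocal - (distance_m - f))
    return far - near

# A 50 mm lens focused at 3 m: stopping down from f/1.4 to f/16
# stretches the in-focus "loaf" from about 0.3 m to nearly 5 m.
```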

Don’t think I’ve forgotten to talk about shutter speed. Just realize that you won’t have too much flexibility there, in particular if you’re shooting handheld. Even with a tripod and manual flash, you can’t adjust the speed that much. Too slow, and any people in the photo will be blurry. Plus, the flash will be ineffective. It can’t stay lit for several seconds or more unless you use a bulb. Too fast, and you won’t get any light. Plus, if you’re syncing the flash with the camera, you’re limited by the top sync speed, which varies by camera and usually runs from 1/180 to 1/250 of a second. You’re better off playing with the other variables in the equation.

Remember, you don’t need to go to the trouble of using manual flash unless you have to. If you need to adjust flash intensity and your camera allows it, you can easily boost or decrease it through simple menu functions. Just look this up in your camera’s manual. You can usually use the +/- button, if your camera has one.

Hope this helps!
