31.5.08

End Note.

Now the printing's done. It's not really the end, as there is so much more to be explored beyond these simple experiments. These outcomes are a marked departure from some of my early ideas. The real key to getting things done appears to have been to follow one element of an idea. I envisaged that my end show would be a display of screens taking live streams from the university CCTV infrastructure, constructed in the fashion of Quantum Time, using some undecided interface that would allow you to navigate time. What I in fact will (or possibly not) be displaying is two static images of some urban space.
Quantum Time - Static TimeLapse (2006 - TV Test Image)

Scan TV (Temporal distortion implicit in method)

A common theme throughout every idea has been the use of software to generate the image at some stage, from Quantum Time's small command line app to the use of Photomerge. A major factor in changing the route taken was the realisation of the amount of up-front learning that would be required to become proficient in Processing and implement even a non-live, static version of Quantum Time using a pre-prepared image/data set. Given the time constraints, software became the product of what I could learn and manipulate enough to achieve a given goal. In all honesty I should have pushed myself to write at least some code, as this would have offered me far more insight and allowed me the kind of control and manipulation I crave.
I have some rather strong views on the educational aspect of these endeavours. There is no education. Higher Education is a misnomer; most of the time is spent in a self-explorative state. At first this sounds valuable: through self-exploration you will learn. This view is harmonious with the vision of the artist-hero, and no doubt some theories of learning. The problem lies with the framework you explore within. I remember way back when I was looking at universities. A big debate back then was 'House Style', the idea that the institution was a production line for its own ideology. I found this rather offensive and focussed on places that offered freedom and exploration... little was I to realise that a strong vision or process is in fact a massive asset. What creates a quality institution is its commitment to a conceptual vision, imparting that to its students to explore or resist. This seems far more valuable than floating around in a tired, banal postmodern irony. The true irony being that my institution's lack of vision is what I have resisted. What is required is a progressive programme that embraces the risk and uncertainty surrounding modern communication, combined with a strong focus on the vocational requirements of working in a world made of data. No tall order compared to burying your head in the sand and becoming another degree vending machine.
Movement over the duration of the scan. It's a bit of a tired technique, and easily created in software from a video stream.
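Easily created because the whole trick is to sample one pixel column from each frame of a video and lay the columns side by side, so horizontal position becomes time. A minimal slit-scan sketch, assuming OpenCV is available and 'input.mp4' stands in for any video file:

```python
# Slit-scan sketch: build an image where each column comes from a
# different moment in a video stream.
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")  # hypothetical source video
columns = []

while True:
    ok, frame = cap.read()  # frame is an H x W x 3 BGR array
    if not ok:
        break
    mid = frame.shape[1] // 2
    # Take a single pixel column from the centre of each frame;
    # time therefore runs left-to-right across the final image.
    columns.append(frame[:, mid])

cap.release()
if columns:
    cv2.imwrite("slitscan.png", np.stack(columns, axis=1))
```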

Where next? More to come on that soon...

21.5.08

Final.

These are the final images, bar some editing to correct colour and contrast, etc. Not sure which to print yet.

19.5.08

Problems. Part Two

This first image was to signal the beginning of my home run with the creation of the final project images. Instead, all that appeared was an image saturated with infrared. Even with the infrared filter in place the bellows appeared to be leaking, reducing the image to a red, contrast-less mess. No matter what I placed over the bellows I could not shift the infrared. To add to this, the power supply in the car cut out, even though it states it's good for 600W. There was no way I was drawing that much current running a laptop and the scanning back. I went home.

Trying to recreate the infrared in controlled conditions appeared to be impossible, even with a few lights against the bellows and a flash. From this I could only conclude that the colour shift was a mix of sunlight and the power supply. I'm aware that scanning backs are very sensitive to their power source, but scouring the forums I could find little information on these issues.

This image from today is a near success. As this project has moved along and become less about some interactive digital output from the camera, I have become more concerned with making images that will print well. This image suffered from no infrared problems but was cut short by the power supply failing again. It needs some balancing work to adjust for a rather large variation between the two exposures. Not really sure what it's of. I'll let people do the reading. I'm just trying to do the making.

17.5.08

Milo Action

If only my uni would give me access to the snazzy new student-attracting tech. Obviously my use for the Milo motion control arm was so abstract as to fall on deaf ears. But had I gained that access I would have been producing something very similar to this from Carnegie Mellon. My version was to be geared towards the production of a super-wide field of view image with infinite depth of field, with the potential to be rendered into a 3D environment for inspection/interaction. I wouldn't normally mention such things, but it irritates me as I'm so close to something that is out of reach to all but a precious few. Outside of the common understanding of what these machines are designed for there are thousands of exciting possibilities, limited only by the red tape that surrounds such fun toys. Calling it a toy is enough to get you rejected from a legitimate conversation about how you might want to use it. At least we can live safe in the knowledge that innovative uses will win over the banal as soon as it is junked or stolen.

Techno.

The technological transitions that fascinate me are one side of the process. People's use of images is what is going to define them beyond the current paradigm of Photography.
Computational Photography, I suppose, can be said to be a branch of photography that is primarily research-based. It's more the science of image creation. Most of the research has obvious commercial applications and can be placed in the context of photographic progression, like increases in quality and automation. But the field also offers an insight beyond the commercial: an insight into the changes at the core of the medium.
In terms of creating images with extended depth of field there are several notable methods, most of which rely on encoding additional data into the image for post-processing. These manifest themselves as a mixture of hardware modification and software. There is a passing similarity to Dolby noise reduction, which transposes the audio signal into the optimum frequency range of a given tape, or to the RIAA equalisation curves that perform a similar task on vinyl. Except that these modern methods of encoding create additional data that can be used to manipulate the image. This video is a reasonably comprehensive overview of the key methods involved.

This work from Stanford using a large array of cameras is just amazing. I'm particularly impressed by the Synthetic Aperture video near the end.
EDIT: As per request. A link to a hi-res version of this video
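To make the synthetic aperture idea concrete: with many cameras spaced along a baseline, you can 'refocus' after the fact by shifting each camera's image in proportion to its offset and averaging, so objects at the chosen depth align while everything else smears away, much like the shallow focus of one enormous lens. A minimal shift-and-add sketch (my own illustration, not the Stanford code), assuming the views are already loaded as same-sized arrays:

```python
# Synthetic aperture refocusing by shift-and-add, sketched in numpy.
# Assumes 'views' is a list of H x W x 3 float arrays from cameras
# spaced evenly along a horizontal baseline (hypothetical data).
import numpy as np

def refocus(views, shift_per_camera):
    """Average all views, shifting the i-th view horizontally in
    proportion to its distance from the central camera. Larger
    shifts bring nearer planes into alignment (i.e. into focus)."""
    acc = np.zeros_like(views[0], dtype=np.float64)
    centre = len(views) // 2
    for i, view in enumerate(views):
        dx = int(round((i - centre) * shift_per_camera))
        # np.roll stands in for a proper sub-pixel warp in this sketch
        acc += np.roll(view, dx, axis=1)
    return acc / len(views)

# e.g. sweep the shift to rack focus through the scene:
# for s in (0.0, 0.5, 1.0, 1.5):
#     img = refocus(views, s)
```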

13.5.08

Gefeller

Andreas Gefeller is another veteran of Bernd and Hilla Becher's teaching at the Düsseldorf Art Academy, whose distinctive pedagogy and conceptual vision of a comparative taxonomy of utilitarian structures has become a standard in art photography. Gefeller's images are overt composites that upon closer inspection reveal themselves as pure digital manipulations, with obvious joins and blends.
In this sense I find them much like the use of blocky, low-resolution fonts within graphic design, acting as obvious motifs for 'digital'. This shorthand screams at you to confront the age-old question of reality and representation within photography. But it does more than this; it forces observation of built environments. In fact they remind me of intelligence photographs, revealing paths and human interactions with an environment from an odd and totally unnatural perspective. Of course there are many visual parallels within contemporary photography; I'm thinking of Hockney's 'Joiners' and Gursky's vast, super-detailed composites. I'm not really sure where to place his work in relation to mine. I'm jealous of his education: why am I not under the thumb of someone with a vision? Even if their vision were not mine, at least I would have something to work against. Instead I'm at the soft end of a triple expansion engine; it's hard to place yourself actively in opposition or sympathy. Or at least I find it to be. I don't really need to place his work, I just need to keep exploring without solid guidance. The obvious fear being that following desire will lead to nothing.

'Desire is always for the past, for the lost infantile completeness. Desire is always about our sense of lack'

7.5.08

Problems. Part One

The scanning back has presented a set of problems that I thoroughly underestimated, and to add to this my conceptual resolve is under significant strain too. I'll start with the technical issues. All my testing has led to one thing becoming very obvious: with the increased effective sensor size there has been a dramatic loss of depth of field. This is something I really overlooked, even though I have experienced it before. This .pdf outlines the concepts and mathematics involved clearly. So even with fewer exposures required to cover the image circle of the lens, there is an increase in the number of images required to create a fully focused image by stacking. The common adage of stopping the lens down to claw back some depth is incredibly misleading. While this works to an extent, at some point it gives way to lens diffraction, which begins to soften the entire image. And in the case of my 210mm lens, even at a reasonable aperture of f/32, focused at 10ft I only have 6" of depth in focus. At 20ft the depth of field rises to around 2ft. The implications for image stacking are hideous. To cover an image like the Basement is going to take at least 5 scans, and that's pushing it.

I have attempted to increase the perception of depth of field via some delicate sharpening, but any perceptual gain is soon lost in the layer stacking process, which just blurs the poor transition from in focus to out. The test below shows all the new problems. It was made using 4 layers per camera movement at around f/16, with each scan taking about 8 minutes. The transitions between layers, where no layer holds in-focus data, come out blurred just like with the DSLR.

Another issue that has appeared is that at the extremes of the camera movement there is some serious lens distortion. I was aware of this with the DSLR composites, but as the BetterLight back can be moved beyond the image circle the effect becomes incredibly obvious. The DSLR had never let the sensor reach these distorted areas.

This brings me to field of view. Now that the 90mm lens is in commission I have been testing out what it can do. Having a wider field of view (100°) compared to the 210mm (72°) means it benefits from a compressing effect, allowing for a potentially greater perceptual depth of field. This comes at the cost of some distortion and a positively miniature projected image circle of 216mm. The smaller image circle translates into rapid light fall-off and crazy distortion, so it's not really a solution for large composites...
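Those figures come straight out of the standard depth of field formulae. As a sanity check, here is a minimal sketch of the calculation, assuming a circle of confusion of 0.01mm, which is a plausible value for a high-resolution scanning back (pick your own to taste; film-era 4x5 tables use ~0.1mm and give far more apparent depth):

```python
# Depth of field sanity check using the standard thin-lens formulae.
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.01):
    # Hyperfocal distance: H = f^2 / (N * c) + f
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = h * subject_mm / (h + subject_mm)
    far = h * subject_mm / (h - subject_mm) if subject_mm < h else float("inf")
    return near, far

FT = 304.8  # millimetres per foot

for feet in (10, 20):
    near, far = depth_of_field(210, 32, feet * FT)
    print(f"210mm at f/32, {feet}ft: {(far - near) / 25.4:.1f} inches in focus")

# Prints roughly 5 inches at 10ft and 21 inches (~2ft) at 20ft,
# in line with the numbers quoted above.
```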
But there is hope. Having tested my lenses solidly for the last few days, I have been able to calculate the smallest aperture I can use before the diffraction can no longer be mitigated with sharpening. The rather mundane result below has depth of field from as far as the hilltops to the boot of the silver car in the foreground. All these issues with depth of field are very much landscape-people problems. Of course, the methods I'm using to create these images are dated and by no means innovative. The real development and exciting stuff is being researched and explored in labs. Methods for gaining more depth of field are a popular area of Computational Photography. Two of the most accessible methods are Coded Aperture and Light Field. Both require a change to the camera's hardware, but all use software to resolve their final output.
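The diffraction limit itself is easy to estimate: the Airy disk diameter grows linearly with the f-number, and once it clearly exceeds the sensor's pixel pitch the whole image softens no matter how carefully you sharpen. A rough sketch of the trade-off, assuming green light at 550nm and an illustrative 12-micron pixel pitch (not a measured value for the BetterLight back):

```python
# Rough diffraction check: Airy disk diameter vs pixel pitch.
# 550nm (green) and a 12-micron pitch are illustrative assumptions.
WAVELENGTH_NM = 550
PIXEL_PITCH_UM = 12.0

def airy_diameter_um(f_number, wavelength_nm=WAVELENGTH_NM):
    # First-minimum diameter of the Airy pattern: d = 2.44 * lambda * N
    return 2.44 * (wavelength_nm / 1000.0) * f_number

for n in (8, 11, 16, 22, 32, 45):
    d = airy_diameter_um(n)
    note = "diffraction dominates" if d > 2 * PIXEL_PITCH_UM else "ok"
    print(f"f/{n}: Airy disk {d:.1f}um vs {PIXEL_PITCH_UM}um pixels -> {note}")

# With these assumptions the blur overtakes the pixel grid around f/22;
# where the real limit falls depends on the back's true pitch and how
# much sharpening can claw back.
```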

2.5.08

Scanning Joys

Today the scanning back arrived. Not without trouble, though, as Customs insisted on delaying the delivery for an inspection and then charging a small fortune in tax. But once in my hands the back was amazing. Today was mainly spent getting accustomed to the Viewfinder software that controls the camera. It's not the most intuitive, but once understood it is actually very sensible. Probably my favourite feature has to be the focus assist. You place a small card patch with a pattern on it in the scene, then select this patch in the preview scan. The program shifts the scanner head to those pixel locations and offers a live readout, displayed as an R,G,B scope with a sharp peak when in focus, not dissimilar to the contrast focusing used by cheap point-and-shoot cameras. This translates into perfectly repeatable focus points that should entirely remove focus errors and patchily focused images.

The test image below is the first time I have been able to use my 90mm lens, as previously the DSLR sensor could not get close enough to the plane of focus. It has a wider field of view than the 210mm and is considerably softer, but now usable. The image was created from six scans with a healthy overlap. It's not focus stacked and it is not using all the image circle available, but it is a spot-on proof of concept.

As I'm now ready to begin shooting the final images, I have to confront myself about what and where I'm going to image. I'm tempted to re-shoot the basement image and possibly the carpark, but I'm still avoiding the issue.
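For anyone wondering what that focus scope is doing under the hood: contrast-detection focusing just evaluates a sharpness metric over the patch at each focus position and looks for the peak. A minimal sketch of one common metric (variance of a discrete Laplacian), assuming the patch arrives as a grayscale numpy array; the actual Viewfinder internals are BetterLight's own:

```python
# Contrast-detection focus metric, sketched in numpy. The sharpest
# focus position is the one that maximises high-frequency content,
# here measured as the variance of a discrete Laplacian of the patch.
import numpy as np

def sharpness(patch):
    """Higher value = more edge contrast = closer to focus."""
    p = patch.astype(np.float64)
    lap = (p[1:-1, 2:] + p[1:-1, :-2] + p[2:, 1:-1] + p[:-2, 1:-1]
           - 4.0 * p[1:-1, 1:-1])
    return lap.var()

# Hypothetical focus sweep: read the patch at each focus setting and
# keep the position where the metric peaks.
# best = max(focus_positions, key=lambda pos: sharpness(read_patch(pos)))
```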