I do not think that different emulsions make much difference. A good scanning rig needs to behave as a “digital RA4 paper”, and the paper doesn’t care about emulsions. It is not hard to produce great scans: just get a color target and practice manual inversions - they allow you to get a feel for each emulsion. I think earlier I posted this sample of Fuji 400H Pro inverted manually. Hard to get better than that, and I use these as a reference for tuning my NLP parameters.

NLP does not produce results like the above by default; it requires tweaking. The amount of tweaking is significant, but it is still faster than manual inversion, which is why I use it. What can be improved is speed and efficiency, and as Nate gets closer to manual quality with each update, the efficiency improves. TBH I am quite happy with everything except speed.

The fact that manual inversions can look perfect tells me that a high-CRI light source is good enough for algorithms to work their magic. What I haven’t seen are good, commercially available RGB light sources optimized for film reproduction (other than, of course, re-purposing iPhones and iPads, which are less than ideal for other reasons). I have suggested to several manufacturers (Kaiser, Negative Supply, etc.) that they should make one, but have yet to see any. I propose a new user input for NLP -> Lightsource used. Yes, this is something I have talked about doing and am planning to do in the future. Speaking of old and faded film, I have no experience with it, sorry.