
Talk:Oversampled binary image sensor

From Wikipedia, the free encyclopedia
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. The article has been rated as C-class on Wikipedia's content assessment scale and as Low-importance on the project's importance scale.

This article is within the scope of WikiProject Photography, a collaborative effort to improve the coverage of photography on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. The article has been rated as C-class on Wikipedia's content assessment scale and as Low-importance on the project's importance scale.

Prior art!

I hope whoever came up with this idea hasn't tried to patent it, given that there's a huge library of prior art - to wit, it's essentially how digital X-ray and gamma-camera scanners have worked for about the last 30 years. Generally at a much, much lower effective shutter speed (but also at far lower incoming radiation intensity), but otherwise the exact same idea.

I.e.: activate the sensor for a certain amount of time and collect samples in one of two ways: either make a regular, frequent check (e.g. once per second) of the array to see which elements have collected a photon and reset them, or keep a constant one-at-a-time watch for incoming photons, registering each one's position and immediately resetting the array. The latter is certainly what gamma cameras do, since they have to determine the incoming ray's position to a rather higher resolution than the actual array density, by calculating the relative pulse energy contributed by each of the large-faced photomultiplier tubes around the strongest one. Then shut the sensor off at the end of the exposure period, total up the counts for each array element, and optionally apply a normalizing transform (depending on whether you need maximum brightness even if it amplifies grain and hides changes in overall intensity between shots, or an accurate, high-dynamic-range representation of varying sample intensity).
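
Purely to illustrate the counting scheme described above, here is a minimal sketch in Python; the names and parameters (expose_binary_frame, accumulate, the 64x64 test scene, the 1000 readout intervals) are hypothetical, not taken from the article or from any real scanner firmware:

import numpy as np

rng = np.random.default_rng(0)

def expose_binary_frame(intensity, rng):
    # One readout interval: each element registers at most one hit,
    # then the whole array is reset. 'intensity' is the expected
    # photon count per element per interval (hypothetical units).
    photons = rng.poisson(intensity)
    return (photons > 0).astype(np.uint32)

def accumulate(intensity, n_intervals, rng):
    # Total up the per-element hit counts over the whole exposure.
    counts = np.zeros(intensity.shape, dtype=np.uint32)
    for _ in range(n_intervals):
        counts += expose_binary_frame(intensity, rng)
    return counts

# Made-up test scene: a smooth intensity gradient across a 64x64 array,
# up to ~0.4 expected photons per element per readout interval.
scene = np.linspace(0.01, 0.4, 64)[None, :] * np.ones((64, 1))
counts = accumulate(scene, n_intervals=1000, rng=rng)

# Two of the normalizing options mentioned above:
# (a) stretch to maximum brightness (amplifies grain, hides shot-to-shot level changes)
stretched = counts / counts.max()
# (b) keep an absolute scale: fraction of intervals in which each element fired
fraction_hit = counts / 1000.0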

It's an idea I toyed with whilst working with such a machine for a couple of years, but other than allowing a pro photographer to experiment post-capture with different balances of temporal sharpness versus dynamic range and bit depth, I couldn't really figure out how useful it would be with visible-light material, what the market appeal would be, or how to make it work properly with the more limited digital photography and data storage technology of the day...

(For that alone, I don't begrudge the developers any patents they seek on the necessary technological upgrades and breakthroughs they come up with in order to make it a reality ... just, y'know, the idea itself is very old news...) 193.63.174.254 (talk) 15:47, 16 March 2017 (UTC)