Any device used to capture motion video is essentially a video camera, consisting of mechanical and electronic parts. When evaluating such systems, the first element to look at is the lens: it is the gateway for all the light passing to the sensor. Different kinds of lenses exist depending on the look wanted, and because standard mounts (C-mount, CS-mount) are used, lenses are interchangeable, which makes cameras more versatile.
The second, equally important part is the sensor, which is essentially an analog-to-digital converter: it transforms rays of light into digital values of the electric current they generate. In practice different frame-grabbing technologies are used, but CCD and CMOS sensors are the most common. The more sensitive the sensor, the better. Currently the most advanced sensors can reach 4K and 8K, which translates to roughly 4096 or 7680 pixels per line. Even higher resolutions can be reached by using a sensor with more photosites: the device can have a larger area, or the photosites can be made smaller.
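As a rough sanity check on these resolution figures, the pixel counts of the common formats can be tabulated. The widths and heights below are the usual nominal figures (4K here is the 3840-wide UHD variant; exact sensor dimensions vary by manufacturer), so treat this as an illustrative sketch rather than a specification:

```python
# Nominal frame sizes for common video resolutions (UHD variants assumed).
RESOLUTIONS = {
    "1080p": (1920, 1080),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
}

def megapixels(width, height):
    """Total pixel count of one frame, in millions of pixels."""
    return width * height / 1e6

for name, (w, h) in RESOLUTIONS.items():
    print(f"{name}: {w} x {h} = {megapixels(w, h):.1f} Mpx")
```

Each step up the ladder roughly quadruples the pixel count, which is why every downstream component feels the jump from HD to 4K or 8K.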
The new industry standard will be high definition in 720p or 1080p, so 4K and 8K sampling (where "K" stands for roughly a thousand pixels per line) will be more than satisfactory. More important, however, is that the analog-to-digital converters use different numbers of bits to describe the signal values. Broadcast-quality sensors use 8, 10 or 12 bits, and the higher bit depths capture more shades of each color. The cryptic number sequences 4:4:4, 4:2:2 and 4:2:0 refer to the relative sampling frequencies of the luma channel and the two chroma channels.
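The interplay of bit depth and subsampling can be made concrete. Assuming the standard Y'CbCr schemes (4:4:4 keeps full chroma, 4:2:2 halves chroma horizontally, 4:2:0 halves it in both directions), the average number of bits stored per pixel works out as follows; this is a back-of-the-envelope sketch, not a description of any particular sensor:

```python
# Average samples stored per pixel for common Y'CbCr subsampling schemes.
# 4:4:4 -> 3 full samples; 4:2:2 -> 1 luma + 1 chroma on average;
# 4:2:0 -> 1 luma + 0.5 chroma on average.
SAMPLES_PER_PIXEL = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}

def bits_per_pixel(subsampling, bit_depth):
    """Average bits per pixel for a given scheme and per-sample bit depth."""
    return SAMPLES_PER_PIXEL[subsampling] * bit_depth

for scheme in SAMPLES_PER_PIXEL:
    for depth in (8, 10, 12):
        print(f"{scheme} at {depth} bit: {bits_per_pixel(scheme, depth):g} bits/px")
```

So a 10-bit 4:2:2 signal needs 20 bits per pixel on average, versus 36 for 12-bit 4:4:4, which is why subsampling is such an effective first stage of compression.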
A stream of JPEG-compressed images from a 1080p sensor (1920x1080) at 50 fps amounts to something like 40-50 Mbit/s; uncompressed, the same signal approaches 2.5 Gbit/s. Any logic components connected directly to the sensor therefore have to cope with data rates of this magnitude. It is often easier to make comparisons on raw data than on compressed material, which would first need to be decompressed before analysis. Even though every digital imaging professional knows that the original format should always be kept as large as possible, some compression has to be applied. The AVCHD format manages to compress the signal into a measly 18 Mbit/s.
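The uncompressed figure is simple arithmetic: pixels per frame, times frames per second, times bits per pixel. A minimal sketch, assuming 8-bit RGB (24 bits per pixel):

```python
def raw_bitrate_mbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed video bit rate in Mbit/s (8-bit RGB assumed by default)."""
    return width * height * fps * bits_per_pixel / 1e6

raw = raw_bitrate_mbps(1920, 1080, 50)
print(f"raw 1080p50: {raw:.0f} Mbit/s")            # about 2488 Mbit/s
print(f"AVCHD at 18 Mbit/s -> roughly {raw / 18:.0f}:1 compression")
```

The comparison makes the scale of the problem clear: squeezing roughly 2.5 Gbit/s into 18 Mbit/s is a compression ratio in the neighborhood of 140:1.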
The network interface connecting the sensor to the storage device depends on the length of the cable. For a shoot on location, fibre PON solutions are optimal, while inside a camera a standardized solution such as Camera Link would be the best choice. Commonly used interfaces are SDI, HD-SDI and HSDL (High Speed Data Link) cables, which carry raw SD and HD signals respectively. HDV is based on MPEG-2 while AVCHD is based on MPEG-4, but both are lossy codecs. For editing purposes raw RGB footage would be best, as no information would need to be lost.
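Whether a given signal fits a given link is again a matter of arithmetic. The sketch below uses the nominal line rates of standard-definition SDI (270 Mbit/s, SMPTE 259M) and HD-SDI (1.485 Gbit/s, SMPTE 292M) and checks an 8-bit 4:2:2 1080p25 payload against them; real links also carry blanking and ancillary data, so this is only an approximation:

```python
# Nominal line rates of common serial video links, in Mbit/s.
LINK_CAPACITY_MBPS = {"SDI": 270, "HD-SDI": 1485}

def fits(link, bitrate_mbps):
    """True if the signal bit rate fits within the link's nominal rate."""
    return bitrate_mbps <= LINK_CAPACITY_MBPS[link]

# 8-bit 4:2:2 1080p25: 1920 * 1080 px * 25 fps * 16 bits/px on average.
signal = 1920 * 1080 * 25 * 16 / 1e6
print(f"{signal:.0f} Mbit/s over HD-SDI: {fits('HD-SDI', signal)}")
print(f"{signal:.0f} Mbit/s over SDI:    {fits('SDI', signal)}")
```

The same payload that comfortably fits HD-SDI exceeds standard SDI by a factor of three, which is why the interface choice follows directly from the signal format.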
Advances in sensor technology have the potential to change the production workflow quite a bit. One day the output resolution from a sensor will be large enough to allow panning and cropping in post-production: starting off with 16K images, off-line edited video material at 8K resolution can be generated. Cameras would simply be brought into the situation, and a best-guess framing on set would be good enough, since the final framing could be chosen afterwards.
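Such post-production reframing boils down to choosing where an output-sized window sits inside the oversized source frame. A minimal sketch, assuming a hypothetical 16K master of 15360x8640 (double 8K UHD in each dimension) and an 8K output:

```python
def crop_window(src_w, src_h, out_w, out_h, pan_x=0.5, pan_y=0.5):
    """Top-left corner of an out_w x out_h crop inside a src_w x src_h frame.

    pan_x / pan_y of 0.0 put the window at the left/top edge, 1.0 at the
    right/bottom edge, and 0.5 in the centre.
    """
    if out_w > src_w or out_h > src_h:
        raise ValueError("crop window larger than source frame")
    x = round((src_w - out_w) * pan_x)
    y = round((src_h - out_h) * pan_y)
    return x, y

# Reframe an 8K output inside the 16K master, panning fully to the right.
print(crop_window(15360, 8640, 7680, 4320, pan_x=1.0))  # (7680, 2160)
```

Animating pan_x and pan_y over time yields a virtual camera move, which is exactly the pan-and-crop freedom the oversized sensor buys in post.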