Underwater video
Video is Latin for "I see". In this context, the term refers to the technology of (electronically) capturing, sometimes manipulating, storing and/or broadcasting, and finally reproducing a sequence of still images at a rate high enough to be perceived by the brain as motion.
Definition of Underwater video:
Underwater video is the use of video technology in water (or other fluids), in marine or industrial environments.
This is the common definition of underwater video; other definitions can be discussed in the article.
Video imaging in wells and boreholes is similar to underwater video, but puts constraints on the shape and size of the equipment, as does for example underwater video in sewer pipes, nuclear power plants or fish tanks.
Sometimes the term video refers to the actual equipment (video recorders, cameras), to video cassettes, or even to the recorded content.
History
The first attempts in the field of underwater imaging were made with a pole-mounted camera in the 1850s by the Briton William Thompson, and several successful attempts were made over the following decades. The first published results from an underwater camera date from 1890 and were made by the French naturalist Louis Boutan [1], who developed underwater photography into a practical method, inventing the underwater flash and other equipment. Photographic techniques, including cinematography, were used exclusively for many years, as television was then at its very earliest stage of development.
Underwater video has existed since the 1940s. The first published results are by Harvey Barnes in Nature in 1952 [2], but the article mentions that the Admiralty had made successful attempts before that, and that Barnes himself had started developing the method in 1948.
Applications
Since then, underwater video has been used for many purposes. The references given below are not selected to be the first published results (although they may be), but are only given as examples and starting points for a few selected applications.
From the start, underwater video has been used for marine biological studies: estimating the abundance of a species [3][4], behavioral studies [5][6], habitat mapping [7][8], studies of fishing gear [9][10] and of whether the seabed is damaged by it [11][12], even in combination with a water sampler [13], and separating living corals from dead ones [14].
It has also been used for marine geology [15], sediment studies [16], tidal microtopography [17], bridge [18] and pipeline [19] inspections, sports [20], marine archaeology [21], entertainment, education and more.
The reasons for this widespread use are several. The most viable alternative to underwater video for making visual observations (if you want moving pictures!) is to use a diver or a waterscope. Both these methods have limitations regarding depth, observation time, temperature, accessibility, documentation procedures, etc., which make video superior in many if not most cases.
A bibliometric study from 2000 [21] shows that the number of papers on underwater video peaked in the mid-1990s. The reason is probably that before then, the equipment was expensive and bulky, and thus not very apt for underwater use. The evolution of electronics made video equipment small (and cheap) enough for widespread use in the 1990s, and many novel applications were reported. Today, papers about video technology per se are not as numerous – not because video is no longer used, but because video is more or less a standard method. In spite of this, there are many misconceptions and some confusion about the technology, in particular when it comes to the evolving digital video systems.
Below, an introduction to the use of video and the technology itself is presented. The topics are divided into a short overview where some applications, advantages and limitations of underwater video are discussed, and a more technical part where some engineering fundamentals of video technology are explained.
Pros and cons
For any mapping method there is a tradeoff between resolution, coverage, labor intensity and information content [22]; see figure 1, where some video methods are compared to others. You may notice that video performs well in terms of resolution and information content, but less well when it comes to workload and areal coverage.
Figure 1: Video methods compared to other methods. Modified after [22].
One obvious advantage of video is that you can use your most capable perceptual system – vision. What you get is what you see. As opposed to other imaging methods (for example acoustics), you can see colors, shapes, etc. (mostly) the way you are used to.
The cost of a simple video system is nowadays not prohibitive. It is mostly non-intrusive and non-destructive; one exception is the REMOTS sediment profiler, which vertically slices the sediment-water interface and views the sediment in profile [23]. Another advantage is that it is easy to communicate results to both peers and non-specialists.
The most prominent limitation on the use of underwater video is visibility, or rather the lack of underwater visibility. Lighting conditions, scattering particles and the water itself reduce the visibility (in most practical cases) to a range of a few tens of meters, often less. Due to this (and camera resolution limitations), relatively small areas are imaged compared to, for example, side-scan sonar.
The method is obviously biased towards visual features, but studies using ultraviolet light have been reported [24], and although infrared light is rapidly attenuated in water, it has reportedly been used for illumination [25].
The sometimes labor-intensive evaluation of video material can be considered a disadvantage, and there is a risk of inter-observer bias that should be considered and addressed when several observers work together.
Underwater video systems
A basic underwater video system consists of a camera in a watertight housing being moved around on a carrier, a way to transmit and/or record the pictures, and a viewer where they are reproduced. These components are discussed briefly below.
Camera
The camera consists of a lens, a light-sensitive element and electronics. A pinhole camera without a lens is not practical because of the light conditions underwater. A camera is sometimes combined with a recording device, and is then called a camcorder.
A camera is characterized by its lens (focal length), the resolution (in pixels or TV lines) and its light sensitivity (lux), but also by the image sensor type.
The lens is typically wide-angle for stand-alone cameras, while the standard zoom lens is used on camcorders. To achieve a wider field of view, a negative (diverging) converter lens is often attached to the lens; for close-up focus, a positive close-up lens is used instead.
Today, the image sensor in the camera is almost always of the CCD (Charge Coupled Device) type, although some systems have a CMOS sensor, which is cheaper to manufacture. A CMOS sensor normally has lower resolution and is less light-sensitive, but it is less sensitive to over-exposure (blooming) and can be integrated on the same chip as the drive electronics. It is also faster, and is often seen in high-speed cameras.
Some low-light cameras may still have a so-called Silicon Intensified Target (SIT) sensor, but the performance of CCD cameras today surpasses them.
Housing
Obviously, an underwater video device must resist water and pressure. Usually this is achieved by putting the camera in an underwater housing: a container that protects and supports the camera. As the camera (especially a camcorder with an internal tape recorder) is fragile, it is essential that the housing is sturdy and resistant to pressure, as well as to the chemical and mechanical stresses of the environment where it will be used. The housing always has a design depth that should be respected.
If the camera is to be handled by a diver, it is also important that it is easy to operate (focus, tape control, viewfinder…), and that the weight can be modified to achieve the desired buoyancy. Other applications may put restrictions on size, weight, etc.
If the camera is a drop camera, ROV-camera or similar, the housing usually contains a power regulating device and cable driver electronics to minimize degradation of the video signal. Sometimes the housing is integrated with motors or servos to move it without having to move the platform it is on. It may also contain illumination.
The simplest way to protect a camera from intruding water is to put it in a sealable soft bag (essentially a bag made of thick, soft plastic) with a clear, flat window (port), often made of acrylic glass or glass, for the lens. Such a bag provides good protection from rain, dust, salt spray and splashing water. It can (if the manufacturer says so) be used underwater down to a couple of meters, but the camera controls are often difficult to operate because of the water pressure on the soft bag, as the camera is operated by simply pushing through the plastic.
A more elaborate housing for a diver-operated camcorder is almost always customized to the particular camera model used. Today, mini-DV camcorders are frequently used, but housings for professional cameras are of course available too. For small depths (<50 m) the housing is generally made of plastic, sometimes aluminum. Professional housings, and those for greater depths, are made from aluminum, titanium and/or other high-strength materials.
There are a number of companies providing underwater housings for video cameras. Two camera models, even from the same manufacturer, very rarely share size, shape and control locations, so the housing has to be adapted anew for every model. Usually it is molded to the shape of the particular camera model and fitted with a number of mechanical buttons and levers that penetrate the housing, allowing the diver to operate the camera controls directly. Most housings have a flat port, but there are also models with a dome port (see below).
To change tape and/or charge the battery pack of a camcorder, the housing has to be opened. If possible, avoid opening the housing in the field, where water spray can reach it – always work in a clean, dry place if you can.
Whenever a camera housing is opened, note that different models may require special procedures when the housing is dis- or re-assembled: levers in a certain position, cables removed in a particular order, etc. Read the manual before you start! It cannot be overstressed that most floodings are caused by mistakes or lack of due care when the housing is assembled, in particular concerning the O-rings, the seals that constitute the barrier between the camera and the water.
Dome port and flat port
The clear glass or plastic window in front of the camera lens in an underwater housing is called the (lens) port. There are basically two types: the flat port and the dome port.
Due to the difference between the refractive indices of water and air, a flat port will increase the effective focal length of the lens by roughly a third (the refractive index ratio is about 1.33), narrowing the field of view by roughly 25%. It will also distort and/or blur the image, more so near the borders of the imaged area. This radial distortion is more pronounced for large apertures and short focal lengths (wide-angle lenses). Sometimes a phenomenon called chromatic aberration is seen as a loss of sharpness and/or color fringes on edges in the picture, again more pronounced for large apertures and short focal lengths.
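The size of these effects can be estimated from Snell's law. A minimal worked example, assuming an idealized thin flat port and refractive indices of about 1.33 for water and 1.00 for air:

```latex
% Snell's law at the flat port (water outside, air inside the housing):
%   n_w \sin\theta_w = n_a \sin\theta_a
\[
  \sin\theta_w = \frac{n_a}{n_w}\,\sin\theta_a
  \qquad (n_w \approx 1.33,\ n_a \approx 1.00)
\]
% Example: a lens with a 60 degree in-air angle of view (half-angle 30 degrees):
\[
  \theta_w = \arcsin\!\left(\frac{\sin 30^\circ}{1.33}\right) \approx 22^\circ
  \quad\Rightarrow\quad
  \text{angle of view in water} \approx 44^\circ \approx 0.74 \times 60^\circ
\]
```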
To overcome these imperfections, the dome port was invented. The dome port is shaped as an arch rotated 360 degrees around its vertical axis. Due to the refraction at the water/optical window/air interfaces, this cupola-shaped optical window acts as a diverging (negative) lens that does not have the distorting properties of the flat port.
The curvature of the dome results in a virtual image being created a short distance (on the order of a few dome radii) in front of the dome port. The camera inside must focus on this nearby virtual image, not on the object itself, and the camera lens is almost always fitted with close-up lenses to facilitate this.
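The position of the virtual image can be estimated from the standard formula for refraction at a single spherical surface. A sketch, assuming a thin dome of radius R concentric with the lens and an object far away:

```latex
% Refraction at a single spherical surface (Cartesian sign convention):
%   n_2/v - n_1/u = (n_2 - n_1)/R,  object at infinity so 1/u -> 0
\[
  v = \frac{n_2\,R}{n_2 - n_1}
    = \frac{1.00 \times R}{1.00 - 1.33}
    \approx -3R
\]
% The negative sign marks a virtual image, about three dome radii in
% front of the port, e.g. roughly 30 cm for a dome of 10 cm radius.
```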
However, flat ports are much easier to manufacture, and thus cheaper, and the limitations are for that reason tolerated in many applications.
Carrier
A video camera can of course be fixed to an object or sitting on the seabed on a frame, but more often it is moved around by a carrier. The carrier can be a diver using a helmet camera or a camcorder. For the cases where a diver is impractical or impossible to use, there are a number of more or less successful carrier designs.
The camera can sit on a frame that is temporarily placed on [26] or just above the bottom. If the camera hangs from a wire it is often called a drop camera; sometimes such a camera is remotely controlled (PTZ – pan, tilt, zoom), sometimes not. Cameras can also be put on poles or frames attached to a ship or float, or of course attached directly to a surface or submarine vessel.
For transect studies the carrier can be a platform towed behind a ship, either as a sled on the seabed or "flying" in the water [3, 27, 28]. The towed platform can be actively steered or just act as a hydrodynamic depressor. In the latter case, depth is controlled by paying out or hauling in the tow line.
The carrier can also be an ROV – Remotely Operated Vehicle – a robotic, unmanned subsea vehicle that is remotely controlled [4]. Apart from a video camera used for documentation and inspection, an ROV can be equipped with other tools, such as manipulator arms and sensors, that require real-time vision.
An emerging class of carriers is the AUV – Autonomous Underwater Vehicle – similar to an ROV but able to operate more or less unsupervised, for example to create mosaic images [29].
Somewhat outside this classification are the so-called crittercams, attached to animals such as seals, whales, fish and turtles [30]. Perhaps the animals can in these cases be classified as carriers.
Transmission and Recording
The video material captured by the camera can be transmitted over a cable or fiber-optic link or over an acoustic link, or recorded. Radio waves are not efficient underwater.
For cable transmission, twisted-pair copper cables or coaxial cables dominate. Long copper cables will degrade picture quality, although this can to an extent be compensated for by electronic circuitry. If several video signals are transmitted through a (multi-conductor) cable, they can interfere with each other (crosstalk). Again, careful design of the system can minimize these problems. Modern video transmission systems, in particular in the upper price range of the ROV business, use fiber optics, which is less sensitive to signal degradation and crosstalk.
Almost always, underwater video is recorded for archival purposes. Recording has until recently been done almost exclusively on magnetic tape, and this medium is still widely used. Following the general development of video and electronic equipment, it is today becoming increasingly common to record (digital) video on hard disks in computers or in dedicated recording devices, on optical disc storage media (e.g. DVDs), or on non-volatile memory cards.
Viewing
No video system is complete without a method to recreate the moving pictures. This is done on a monitor (or VDU – Video Display Unit) that is adapted to the type of video used in the system. The still common CRT – Cathode Ray Tube – monitors are notorious for their bad performance in sunlight, but indoors or under a cover they still perform well in terms of picture quality. They are however heavy and bulky, and thus not very practical, at least for use onboard a ship.
The CRT has been the standard for monitors from the start, but it is rapidly being replaced by the TFT LCD – Thin Film Transistor Liquid Crystal Display.
A (color) CRT creates an image by firing a scanning beam of electrons at tiny red, green and blue phosphor dots on the inside of the screen. By turning the electron beam on and off, the dots can be made to glow or not, and from a distance they will create a picture.
An LCD uses a backlight (or sometimes incident light) as the light source, and the picture is created by controlling how much of this light reaches the colored dots, by selectively blocking or opening the light path. This "light valve" is made possible by liquid crystals, which remain transparent unless a voltage is placed across them.
While CRTs have issues with size and power consumption, TFT LCDs have other issues with for example resolution, response time, viewing angle, contrast/brightness and a few more.
The monitor resolution is rarely a problem for (standard) video applications, but sometimes the number of pixels on the screen does not match the video resolution, creating a blurry picture and/or artifacts. Another issue is that a TFT LCD does not refresh as fast as a CRT, which leads to smearing of the picture (ghosting) and sometimes jagged pixel effects; the response time is longer.
CRT monitors are viewable from almost any angle, while TFT LCDs only produce a good image from inside a certain arc of angles. As the technology improves, this arc is being increased.
Contrast is the range in which brightness can vary between the darkest and the lightest areas on the screen, expressed as a ratio (e.g. 800:1). The higher this ratio, the better the image quality.
Brightness of a monitor, measured as luminance, is the amount of (visible) light leaving the surface of the monitor in a given direction. The light leaving the surface can be due to emission, reflection or transmission. The SI unit of luminance is candela per square meter (cd/m2), sometimes called nits, from the Latin nitere, to shine. The greater this number, the brighter the display can be, and thus the more visible in bright light, e.g. outdoors.
Note that some LCD monitors take advantage of the incident light and are not as affected by sunlight, and that a high brightness level makes the monitor consume more power, introducing a tradeoff between brightness and power consumption, and sometimes creating a cooling problem.
Underwater video – technology
Fundamental properties of the video signal
The video signal itself is characterized by a number of parameters. Their actual values constitute a trade-off between the available bandwidth (or channel capacity/data rate for digital signals) and the information content. Information theory states that the conveyable information content of a transmission channel is directly proportional to the frequency range (or the data rate) of the signal used for the communication: the larger the bandwidth/data rate, the more information can be conveyed.
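The proportionality referred to is Hartley's law; with noise taken into account it becomes the Shannon–Hartley theorem, a standard result from information theory (not specific to video):

```latex
% Shannon-Hartley: capacity C (bit/s) of a channel with bandwidth B (Hz)
% and signal-to-noise ratio S/N:
\[
  C = B \,\log_2\!\left(1 + \frac{S}{N}\right)
\]
% For a fixed S/N the capacity grows linearly with bandwidth, which is
% why a wider-band video signal can convey more picture information.
```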
Each individual image in a video stream is called a frame. For analog video formats, a frame is specified as a number of horizontal (scan) lines (sometimes called TV-lines, TVL), each with a determined length in time. A digital image is defined as a number of rows of picture elements/pixels, or a matrix of pixels if you like.
A complete description of a video stream is called a video format; this term is sometimes extended to descriptions also of physical media (tapes, discs), transmission details or equivalent.
Many of the parameters used to describe video formats originate in analog video/TV standards and are more or less obsolete in the context of digital video. Since they are widespread and still in use, it is still worthwhile to describe them in some detail.
Aspect ratio
The aspect ratio describes the relation between the width and height of a video screen (screen aspect ratio) or of the individual pixels (pixel aspect ratio). A traditional television screen has a screen aspect ratio of 4:3; wide-screen sets use 16:9. Computer monitors and digital video usually have nominal screen aspect ratios of either 4:3 or 16:9.
The pixel aspect ratio relates to a single pixel in digital video. On computer monitors it is usually 1:1 (square pixels), while digital video formats often specify other ratios, inherited from analog video standards and the conversion from analog to digital signals.
As an example, consumer camcorders, often used for underwater video recordings, are often based on a digital video standard called DV (usually transferred over the IEEE 1394/FireWire interface). DV is defined with a 4:3 screen aspect ratio and a screen resolution (for PAL) of 720x576, but with a pixel aspect ratio of approximately 1.09:1, i.e. pixels slightly wider than they are tall. This means that a PAL DV video will appear horizontally compressed if displayed on a computer monitor with square pixels, for example in an editing program. There are ways to correct this, typically by re-sampling the image, and in practice it is often not important.
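The relation between stored and displayed frame size can be checked with a few lines of arithmetic. A sketch (the 59/54 figure is the nominal PAL DV pixel aspect ratio and is an assumption here, not taken from the text above):

```python
# Back-of-envelope check of the PAL DV geometry described above.
STORED_W, STORED_H = 720, 576   # stored (non-square) pixels per PAL DV frame
DISPLAY_ASPECT = 4 / 3          # intended screen aspect ratio

# Width the frame must have in square pixels to display correctly:
display_w = round(STORED_H * DISPLAY_ASPECT)   # 768

# Implied pixel aspect ratio (the nominal PAL DV value is 59/54):
par = display_w / STORED_W                     # ~1.067 with this simple model

print(f"resample 720x576 -> {display_w}x{STORED_H} for square-pixel display")
print(f"pixel aspect ratio ~ {par:.3f} (nominal 59/54 = {59 / 54:.3f})")
```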
Framerate
The number of still images, frames, per second is known as the frame rate, measured in frames per second (fps) or Hz. If a sequence of still pictures is shown at a frame rate above 10-12 fps or so, the human perceptual system will regard it as a moving scene ("The Myth of Persistence of Vision Revisited", Journal of Film and Video, 45:1, Spring 1993, pp. 3-12).
Different video standards have different frame rates. The analog television standards used in Europe, Australia and elsewhere (PAL, SECAM) specify 25 Hz, as does the digital MPEG-2/DVB-T system replacing them. Another standard, NTSC (used in North America, Japan, etc.), specifies 29.97 Hz. Digital formats sometimes allow arbitrary frame rates, specified in the file or streaming format.
Analog video
In older video cameras (until 1990 or so) a picture is projected by the camera's optics onto a light-sensitive plate in a specialized electronic component called a video camera tube. The photons of the projected picture change the electrical properties of the camera tube plate; more light (more photons) induces a larger change.
By scanning the plate with a focused beam of electrons, moved in a pattern, these property changes can be read out from the camera tube as small current changes. The optical picture can thus (after amplification) be represented by a variation in voltage or amplitude.
It is of course essential to know the way in which the photoelectric plate is scanned – for example when a new frame starts, the number of scan lines, when a line scan starts, the time to scan each line, etc. These synchronizing elements are indicated in the video signal by certain amplitude levels, different from those used for picture information.
If the image is monochrome, these voltage variations, picture as well as synchronization, are called the luma signal. Color is added to the picture in a slightly more involved way, and is transmitted as a chrominance (or chroma) signal.
To recreate the picture from an analog signal, another cathode ray tube (CRT) can be used (cf. above). In this case, an electron beam is swept over a fluorescent surface, producing light (emitting photons) in proportion to its amplitude – higher amplitude means more light. By synchronizing the electron beam of the CRT to the one emanating from the camera (cf. above) the picture projected onto the camera tube plate can be recreated.
The number of analog scan lines in a full frame is different for different video standards, but is 625 lines in the European standards PAL and SECAM; not all of these lines are used for image data, though.
Although modern cameras use solid-state image sensors (CCDs or CMOS), the signal emanating from them follows the standards that were established for CRT technology. This creates a number of anachronistic complications that may seem confusing, for example the interlaced lines in PAL and NTSC video.
Interlaced and progressive
Although the perceptual system interprets a sequence of images as motion, we will still see a flickering scene if the image is updated at a rate below 15 Hz or so, and this phenomenon only gradually decreases up to maybe 75 Hz, where most people are unable to see the flicker. To increase the perceived rate of image updates without increasing the bandwidth needed, some video systems (notably the ones used for TV broadcasting) use a concept called interlaced video (as opposed to progressive), which sometimes causes unnecessary concern and confusion.
Interlacing is related to how the individual frames are captured in the camera and recreated on the monitor. Consider an image composed of horizontal lines. If every line is numbered consecutively, the image can be partitioned into two fields: the odd field (odd-numbered lines) and the even field (even-numbered lines). If the odd field is captured/recreated first, then the even field, the monitor screen has been updated twice for each complete frame; a 25 Hz frame rate is seen as something updated at 50 Hz (the field rate). A disadvantage is that the technique can create visual artifacts, such as jagged edges, apparent motion or flashing. These artifacts are often seen when interlaced video is displayed on computer monitors or LCD projectors (which are progressive by nature), in particular when played back in slow motion or when capturing still pictures from the video stream.
Progressive video formats capture and recreate all of the horizontal lines in a frame consecutively. The result is a higher (perceived) resolution and an absence of the artifacts described above.
Interlaced video can be converted to non-interlaced, progressive video by more or less sophisticated procedures, collectively known as de-interlacing. De-interlacing removes the visual artifacts to an extent, but not entirely, and it sometimes introduces new impairments to the image, such as an apparent blurring.
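A minimal sketch of how fields relate to frames, together with two naive de-interlacing methods (line doubling and line blending). It assumes a frame stored as a 2-D NumPy array of gray values; real de-interlacers are motion-adaptive and far more elaborate:

```python
import numpy as np

def split_fields(frame: np.ndarray):
    """Split an interlaced frame into its two fields (every other line)."""
    return frame[0::2], frame[1::2]        # top field, bottom field

def deinterlace_line_double(frame: np.ndarray) -> np.ndarray:
    """Keep one field and duplicate each line; halves vertical resolution."""
    return np.repeat(frame[0::2], 2, axis=0)[: frame.shape[0]]

def deinterlace_blend(frame: np.ndarray) -> np.ndarray:
    """Replace each bottom-field line by the mean of its neighbours;
    removes combing on moving edges at the cost of some blurring."""
    out = frame.astype(np.float32)
    out[1:-1:2] = (out[0:-2:2] + out[2::2]) / 2
    return out.astype(frame.dtype)

frame = np.random.randint(0, 256, (576, 720), dtype=np.uint8)  # dummy PAL frame
top, bottom = split_fields(frame)          # two 288x720 fields
progressive = deinterlace_blend(frame)
```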
Digital video
As described above, analog video is a continuously varying value as a function of time – a signal – representing light changes in a projected scene. An analog video signal is continuous in both time and amplitude, and (in theory) arbitrarily small fluctuations in the signal are meaningful and carry information.
A binary digital signal is either on or off (high/low, true/false, 1/0, etc.), but note that these states are generally represented by analog levels lying below or above set threshold values.
To represent the analog signal as binary values, the signal is constrained to a discrete set of values in both time and amplitude. This is done by a process called Analog-to-Digital Conversion (ADC). In short, the analog value is measured at certain intervals (the sampling rate) and represented as a stream of binary numbers. The size of the binary numbers (the number of bits) determines the number of possible amplitude levels; the sampling rate limits the frequency content of the digitized signal.
The number of rows and columns in the digital frame depends on the sample rate used. As an example, a PAL frame sampled for DV (a common digital video standard) at 13.5 MHz consists of 576 lines, each 720 pixels long (strictly, only 702 of these pixels (52 µs) carry image content; part of the horizontal blanking is sampled too), while the same frame sampled at 6.4 MHz may contain 320x240 pixels.
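These figures follow directly from the sampling parameters; a small check using the values given above:

```python
# Quantization: an n-bit converter distinguishes 2**n amplitude levels.
bits = 8
levels = 2 ** bits                  # 256 levels for common 8-bit video

# Samples per active line = active line duration x sampling rate:
active_line = 52e-6                 # 52 microseconds (PAL active line)
rate = 13.5e6                       # 13.5 MHz sampling rate
samples = active_line * rate        # 702 image samples per line

print(levels, int(samples))         # 256 702 (stored as 720 incl. blanking)
```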
When converting from analog to digital format, the synchronizing components of the video signal are used to determine what part of the signal to sample, but a digital frame contains only the image information, and the number of lines in the digital frame is reduced when compared to its analog counterpart; for PAL 576 lines instead of 625 if sampled at 13.5 MHz, for example.
Other digital video formats, not originating in analog video, have other sizes. For example High Definition video (as standardized in ITU-R BT.709) can have a picture size of 720 rows of 1280 pixels, the computer monitor standard SVGA has 600 rows of 800 pixels each, etc.
Bit rate
Bit rate is a measure of channel capacity: the amount of data conveyed over a (binary) digital channel. It is measured in bits per second (bit/s, sometimes bps). More bits per second generally means better video quality. The bit rate can be fixed or variable; real-time streaming video often uses a fixed rate, while recorded video may use a variable bit rate.
Compression of digital video
Digital video can be compressed, i.e. the number of bits necessary to convey the images can be decreased to lower the bit rate. This data compression (or encoding) is possible because the images contain spatial and temporal redundancies that can be removed in the compression process.
As a simplified example, consider transmitting "20xZ" instead of "ZZZZZZZZZZZZZZZZZZZZ" – a compression rate of 1:5. This of course implies that the receiver knows how to interpret "20xZ", a process known as decoding.
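The "20xZ" example is run-length encoding in miniature. A toy encoder/decoder sketch (illustrative only; it assumes the encoded symbols are not themselves digits or the letter x):

```python
from itertools import groupby

def rle_encode(s: str) -> str:
    """Collapse runs of identical characters: 'ZZZ...Z' (20x) -> '20xZ'."""
    return "".join(f"{len(list(run))}x{ch}" for ch, run in groupby(s))

def rle_decode(s: str) -> str:
    """Reverse the encoding: read a count, an 'x', then the symbol."""
    out, i = [], 0
    while i < len(s):
        j = s.index("x", i)
        out.append(s[j + 1] * int(s[i:j]))
        i = j + 2
    return "".join(out)

assert rle_encode("Z" * 20) == "20xZ"    # 20 characters -> 4: a 1:5 ratio
assert rle_decode("20xZ") == "Z" * 20
```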
Generally, the spatial redundancy is reduced by analysis of changes within a frame (intraframe compression), while the temporal redundancy is reduced by registering differences between frames (interframe compression).
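Interframe compression can be illustrated with plain frame differencing, a highly simplified stand-in for the motion-compensated prediction used by real codecs. A sketch, with frames assumed to be NumPy arrays:

```python
import numpy as np

def temporal_residual(prev: np.ndarray, curr: np.ndarray) -> np.ndarray:
    """Interframe coding in miniature: store only the change between frames.
    The decoder reconstructs curr = prev + residual; a static scene gives a
    residual that is almost all zeros and therefore compresses very well."""
    return curr.astype(np.int16) - prev.astype(np.int16)

prev = np.zeros((576, 720), dtype=np.uint8)   # dummy frames
curr = prev.copy()
curr[100:110, 200:210] += 50                  # a small moving patch

residual = temporal_residual(prev, curr)
changed = np.count_nonzero(residual) / residual.size
print(f"{changed:.2%} of the pixels changed")  # 0.02%, cheap to encode
```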
There are a number of standards for video compression; DV and MPEG-2 (used for DVDs) are just two. The scheme or algorithm for the encoding/decoding is often packaged as a small plug-in program (a codec) that fits into a larger framework (a container file format). For example, the Microsoft AVI format is a container format in which many different codecs can be used. Other well-known container formats include Apple's QuickTime, DMF and RealMedia.
A compression algorithm, and hence a codec, is a tradeoff, emphasizing different aspects of the compressed video: color, detail resolution, motion, file size or low bit rate, ease of (de-)compression, etc. There are hundreds of codecs with different strengths available.
Unfortunately, underwater video often does not compress well. Consider a typical underwater scene, where the camera is moving across a seabed covered with vegetation. The scene itself contains few areas with uniform properties: there are many details, changes in light and color, and so on; there is no blue sky covering 50% of the picture. Intraframe compression is therefore not very efficient. Since the camera is moving, the entire picture changes between frames, which hampers interframe compression. How this "incompressibility" shows in the result depends on the actual codec used: it may be seen as larger files, a lower frame rate, flattening of the colors, artifacts, loss of resolution, etc.
There is really no way around this other than to increase the amount of data, that is, to use a higher bit rate. In practice, and for standard-resolution video, DV or DVD bit rates are sufficient for all but the most demanding tasks.
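To put rough numbers on this, compare an uncompressed PAL-sized stream with DV's nominal video bit rate of about 25 Mbit/s (the 12 bits per pixel assumes 8-bit luma with 4:2:0 chroma subsampling):

```python
# Uncompressed bit rate of a PAL-size stream vs. the DV bit rate.
w, h, fps = 720, 576, 25
bits_per_pixel = 12                  # 8-bit luma + subsampled (4:2:0) chroma

raw = w * h * fps * bits_per_pixel   # ~124 Mbit/s uncompressed
dv = 25e6                            # DV video bit rate, ~25 Mbit/s

print(f"raw: {raw / 1e6:.0f} Mbit/s, DV compression ratio ~ {raw / dv:.0f}:1")
```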
- ↑ Boutan, L. (1893); Mémoire sur la Photographie Sous-Marine; Archives de Zoologie Expérimentale et Générale; 3ème sér., 1, pp. 281-324
- ↑ Barnes, H. (1952); Underwater television and marine biology; Nature, 169, pp. 477–479
- ↑ Smith, C. J., Papadopoulou, K.-N. (2003); Burrow density and stock size fluctuations of Nephrops norvegicus in a semi-enclosed bay; ICES Journal of Marine Science; 60, pp. 798–805
- ↑ Moser, M. L., Auster P. J., Bichy, J. B. (1998); Effects of mat morphology on large Sargassum-associated fishes: observations from a remotely operated vehicle (ROV) and free-floating video camcorders; Environmental Biology of Fishes; 51, pp. 391–398
- ↑ Grémillet, D., Enstipp, M. R., Boudiffa, M., Liu, H. (2006); Do cormorants injure fish without eating them? An underwater video study; Marine Biology; 148, pp. 1081–1087
- ↑ Esteve, M. (2007); Two examples of fixed behavioural patterns in salmonines: female false spawning and male digging; Journal of Ethology; 25:1, pp. 63-70
- ↑ Ryan, D. A., Brooke, B. P., Collins, L. B., Kendrick, G. A., Baxter, K. J., Bickers, A. N., Siwabessy, P. J. W., Pattiaratchi, C. B. (2007); The influence of geomorphology and sedimentary processes on shallow-water benthic habitat distribution: Esperance Bay, Western Australia; Estuarine, Coastal and Shelf Science; 72:1-2, pp. 379-386
- ↑ Abdo, D., Burgess, G., Coleman, K. (2004); Surveys of benthic reef communities using underwater video; Long-term Monitoring of the Great Barrier Reef Standard Operational Procedure Number 2, 3rd Revised Edition; Australian Institute of Marine Science, Townsville 2004; ISBN 0-64232231
- ↑ Zhou, S., Shirley, T. C. (1997); Performance of two red king crab pot designs; Canadian Journal of Fisheries and Aquatic Sciences / Journal canadien des sciences halieutiques et aquatiques; 54, pp. 1858–1864
- ↑ Cooper, C., Hickey, W. (1987); Selectivity experiments with square mesh cod-ends on haddock and cod; IEEE OCEANS; 19, pp. 608-613
- ↑ Vorberg, R. (2000); Effects of shrimp fisheries on reefs of Sabellaria spinulosa (Polychaeta); ICES Journal of Marine Science; 57 pp. 1416–1420
- ↑ Linnane A., Ball B., Munday B., van Marlen B., Bergman M., Fonteyne R. (2000); A review of potential techniques to reduce the environmental impact of demersal trawls; Irish Fisheries Investigation Series Publications (New Series) No. 7; ISSN 0578-7467
- ↑ Dounas, C. G. (2006); A new apparatus for the direct measurement of the effects of otter trawling on benthic nutrient releases; Journal of Experimental Marine Biology and Ecology; 339, pp. 251–259
- ↑ Harris, P. T., Heap, A. D., Wassenberg, T., Passlow, V. (2004); Submerged coral reefs in the Gulf of Carpentaria, Australia; Marine Geology; 207:1-4, pp. 185-191
- ↑ Field, M. E., Nelson, C. H., Cacchione, D. A., Drake, D. E. (1981); Sand waves on an epicontinental shelf: Northern Bering Sea; Marine Geology; 42:1-4, pp. 233-258
- ↑ Osborne, P. D., Greenwood B. (1991); Frequency dependent cross-shore suspended sediment transport. 2. A barred shoreface; Marine Geology; 106, pp. 25-51
- ↑ Lund-Hansen L., Larsen E., Jensen K., Mouritsen K., Christiansen C., Andersen T., Vølund G. (2004); A new video and digital camera system for studies of the dynamics of microtopographic features on tidal flats; Marine Georesources and Geotechnology; 22: 1-2, pp. 115-122
- ↑ DeVault, J.E. (2000); Robotic system for underwater inspection of bridge piers; Instrumentation & Measurement Magazine, IEEE; 3:3, pp. 32-37
- ↑ Gracias, N., Santos-Victor, J. (2000); Underwater Video Mosaics as Visual Navigation Maps; Computer Vision And Image Understanding; 79:1, pp. 66-91
- ↑ ">Blanksby, B. A., Skender, S., Elliott, B. C., McElroy, K., Landers, G. J. (2004); An Analysis of the Rollover Backstroke Turn by Age-Group Swimmers; Sports Biomechanics; 3:1, pp. 1-14
- ↑ Coleman, D. F., Newman, J. B., Ballard, R. D. (2000); Design and implementation of advanced underwater imaging systems for deep sea marine archaeological surveys; OCEANS 2000 MTS/IEEE Conference and Exhibition; 1, pp. 661-665