Portable digital video camera configured for remote image acquisition control and viewing

Document No. 1784840 · Published 2019-12-06 · Chinese

Note: This invention, "Portable digital video camera configured for remote image acquisition control and viewing," was designed and created on 2011-09-13 by L·欧当那, R·曼德尔, M·邓唐, B·宝得利, A·汤姆金斯, K·格甘努斯, and K·P·巴恩斯. Summary: A wearable digital video camera (10) is equipped with wireless connection protocols and global navigation and position determination system technology to provide remote image acquisition control and viewing. A packet-based open wireless technology standard protocol (400) is preferably used to provide control signals or streaming data to the digital video camera, and to access image content stored on, or streamed from, the digital video camera. GPS technology (402) is preferably used to track the position of the digital video camera as it records image information. A rotational mount (300) with a locking member (330) on the camera housing (22) allows the pointing angle of the wearable digital video camera to be adjusted while it is attached to a mounting surface.

1. An integrated point-of-view digital video camera, comprising:

a camera comprising a lens and an image sensor, the image sensor capturing light that propagates through the lens and is representative of a scene, and the image sensor producing image data of the scene;

a processor that receives the image data directly or indirectly from the image sensor;

a wireless connection protocol device operatively connected to the processor to transmit image content to, and receive control or data signals from, a wireless connection enabled controller via wireless transmission; and

firmware operating on the processor and configured to generate image content corresponding to the image data representing the scene for wireless transmission to the wireless connection enabled controller.

2. The digital video camera of claim 1, wherein the processor is configured to adjust a set of camera parameters, and wherein firmware operating on the processor adjusts selected ones of the set of camera parameters in response to control or data signals received from the wireless connection enabled controller generated in response to image content received by the wireless connection enabled controller.

3. The digital video camera of claim 2, wherein the set of camera parameters includes illumination and color settings.

4. The digital video camera of claim 1, wherein the wireless connection enabled controller comprises a smartphone or a tablet.

5. The digital video camera of claim 1, further comprising a global navigation and position determination receiver operatively connected to the processor to provide a time standard that supports timing of transmission of image content over wireless transmissions and timing of reception of control or data signals over wireless transmissions.

6. The digital video camera of claim 5, wherein the global navigation and position determination receiver is of the GPS type.

7. An integrated point-of-view digital video camera, comprising:

a camera comprising a lens and an image sensor, the image sensor capturing light that propagates through the lens and is representative of a scene, and the image sensor producing image data of the scene;

a processor that receives the image data directly or indirectly from the image sensor;

a global navigation and location determination device operatively connected to the processor to provide location tracking of the digital video camera as the image sensor generates image data of the scene; and

firmware operating on the processor and configured to generate digital video camera information associated with the image data corresponding to a representation of the scene.

8. The digital video camera of claim 7, wherein the global navigation and location determination device includes a GPS component.

9. The digital video camera of claim 8, further comprising a housing including a curved optical support barrel covering the lens; and wherein the GPS component includes a metal ground plane positioned between a GPS antenna and a GPS receiver module, the metal ground plane having a shape complementary to the curved shape of the optical support barrel to provide increased ground plane area conforming to the shape of the optical support barrel.

10. The digital video camera of claim 9, wherein the GPS antenna is of the passive patch antenna type.

11. An integrated point-of-view digital video camera, comprising:

a camera housing containing a lens and an image sensor, the camera housing having a length, and the image sensor capturing light that propagates through the lens and represents a scene having a horizontal orientation;

a mounting mechanism adapted to fixedly mount the camera housing to a bracket having an off-axis orientation relative to the horizontal orientation of the scene, the mounting mechanism including first and second mounts rotatably connected to assume an adjustable orientation relative to each other, thereby providing rotational positional adjustment of the camera housing;

one of the first and second mounts comprising a rotatable plug mount having a circumferential edge, and the other of the first and second mounts comprising a base mount having a circular opening and opposing sidewalls with aligned slots formed through the sidewalls, the rotatable plug mount being disposed through the circular opening in the base mount, and the circumferential edge of the plug mount and the opposing sidewalls of the base mount having, respectively, circumferentially serrated edges and serrated receiving edges in matable relation for bidirectional ratcheting rotational movement of the plug mount when it is disposed through the circular opening in the base mount; and

a slidable locking member sized to fit within each slot in the base mount and to slidably extend through, and project out of, either sidewall when inserted in the slot, the locking member having a locking end piece with a serrated surface configured to engage the plug mount and thereby lock it in place.

12. The digital video camera of claim 11, wherein the plug mount is configured for connection to the camera housing to provide linear positional adjustment of the plug mount along a length of the camera housing.

13. The digital video camera of claim 11, wherein the mounting mechanism is configured to couple to the bracket to provide rotational positional adjustment of the camera housing about a mounting axis of rotation transverse to the length of the camera housing, enabling the digital video camera to be set to a desired viewing angle.

Technical Field

The present disclosure relates to point of view (POV) cameras or camcorders, and more particularly, to an integrated hands-free POV action sports camera or camcorder configured for remote image acquisition control and viewing.

Background

First-person cameras are a relatively new product category that has been adopted by extreme sports enthusiasts to capture POV video in a hands-free manner. Conventional first-person cameras consist mainly of a lens that must be tethered to a separate digital video recorder or camcorder. Figs. 1A and 1B present diagrams of prior art first-person cameras requiring a tethered-lens approach to capture a first-person video recording. Fig. 1A presents a Twenty20™ device, and Fig. 1B presents a Visort™ device. Figs. 1C and 1D present diagrams of a prior art camera tethered to a camcorder, implementing a tethered-lens method to capture a first-person video recording; the device shown is a Samsung device.

These products are not genuinely hands-free, and consumers have devised their own mounting techniques to allow "hands-free" video recording of a limited range of athletic activities. Fig. 1E presents an illustration of a tethered camera attempting to facilitate hands-free POV video recording; the device shown is a blackeye device. More recent devices have attempted to transmit image data from "tethered" cameras to a separate camcorder via IR signals to eliminate the tether cables.

Recently, integrated hands-free POV extreme motion cameras have become available. Figs. 2A and 2B present diagrams of two prior art products that implement an integrated solution for first-person video recording. These products are still in their infancy and can be difficult to use.

Disclosure of Invention

A preferred embodiment of a portable digital video camera or camcorder (hereinafter collectively referred to as a "video camera") is equipped with Global Positioning System (GPS) technology for data acquisition and a wireless connection protocol to provide remote image acquisition control and viewing. A wireless connection protocol, such as a packet-based open wireless technology standard protocol, is used to provide control signals or streaming data to the wearable video camera and to access image content stored on, or streamed from, the wearable video camera. Performing intelligent frame analysis on image content enables simultaneous picture setting optimization for one or more cameras to achieve multi-angle and three-dimensional video. A GPS receiver integrated in the video camera enables tracking of the position of the video camera as it captures image information. The GPS receiver enables periodic, high-accuracy acquisition of position every few seconds for aggregating video with mapping. Including GPS technology introduces a new level of context to any video, making location, speed, time, and ambient conditions as important as the recorded scene. The GPS function makes it easier to capture video during an activity and share it online shortly afterward. For example, a user may view the scenery extending along a mountain run while tracking progress, speed, and elevation on a map. GPS data, together with high-definition video images, can be easily edited to organize video content, configure cameras, and post content online.
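The periodic position tagging described above amounts to pairing GPS fixes with the video timeline. A minimal sketch follows; the `GpsFix` fields and the `TaggedRecording` class are illustrative assumptions, not a specific camera API.

```python
from dataclasses import dataclass, field

@dataclass
class GpsFix:
    """One position sample, time-stamped against the recording clock."""
    t: float            # seconds since recording start
    lat: float
    lon: float
    speed_mps: float
    elevation_m: float

@dataclass
class TaggedRecording:
    """Video timeline annotated with GPS fixes taken every few seconds."""
    fixes: list = field(default_factory=list)

    def add_fix(self, fix):
        self.fixes.append(fix)

    def fix_at(self, t):
        # Most recent fix at or before video time t; None before the first fix.
        candidates = [f for f in self.fixes if f.t <= t]
        return max(candidates, key=lambda f: f.t) if candidates else None
```

Looking up `fix_at(t)` during editing lets each stretch of video be overlaid with the location, speed, and elevation recorded nearest that moment.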

Customizing the GPS ground plane and electrically coupling it to the camera's housing or other metal improves reception and performance. The ground plane is maximized by coupling it to the aluminum housing in which the camera is located. The result is higher antenna gain and correspondingly enhanced signal reception when the video camera is mounted in a variety of locations.

The video camera is configured with a signal path that allows a separate signal security module to be provided for use with only those applications that require one. The iPhone security module is separately packaged in the shape of a small Subscriber Identity Module (SIM) card.

Simplified mounting of the wearable video camera is accomplished by rotating the horizon 180°, so that the video camera can be mounted fully upside down while the picture is kept in the correct orientation. Rotation of the horizon may be accomplished electronically or mechanically. A swivel mount with a locking feature allows the camera's pointing angle to be adjusted while it is attached to a mounting surface using adhesive, straps, or other connection options. The camera housing is equipped with a scissor spring to assist in moving the slide switch activator over a longer range of travel. A user wearing the camera uses the slide switch activator to initiate video image recording.
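Done electronically, the 180° horizon rotation amounts to reversing each frame along both axes. A minimal sketch, with a frame represented as nested lists of pixel values:

```python
def rotate_frame_180(frame):
    """Reverse row order and each row's pixel order: a 180° rotation,
    so an upside-down-mounted camera still records an upright picture."""
    return [row[::-1] for row in frame[::-1]]

# A 2x3 toy "frame" of pixel values:
frame = [[0, 1, 2],
         [3, 4, 5]]
```

Applying the rotation twice returns the original frame, which is why the same path serves both right-side-up and inverted mountings.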

A portable digital video camera includes a camera housing and a lens.

Some embodiments of the portable digital video camera include an integrated hands-free POV extreme motion digital video camera.

Some embodiments of a portable digital video camera or an integrated hands-free POV extreme motion digital video camera include an image sensor for capturing image data.

Some embodiments of a portable digital video camera or an integrated hands-free POV extreme motion digital video camera include a manual horizon adjustment control for adjusting an orientation of a horizontal image plane recorded by an image sensor relative to a housing plane of a camera housing.

Some embodiments of a portable digital video camera or an integrated hands-free POV extreme motion digital video camera include a laser collimation system having one or more laser sources capable of projecting light emissions to define a horizontal projection axis coordinated with the orientation of a horizontal image plane.

Some embodiments of a portable digital video camera or an integrated hands-free POV extreme motion digital video camera include a microphone and a manually operable switch for controlling one or both of audio and video data capture operations, the switch having an activator that covers the microphone whenever the switch is in an OFF position.

Some embodiments of a portable digital video camera or an integrated hands-free POV extreme motion digital video camera include a "quick release" mounting system that can be used in conjunction with a laser alignment system to adjust image capture orientations for pitch, yaw, and roll.

Further aspects and advantages will become apparent from the following detailed description of preferred embodiments, with continued reference to the accompanying drawings.

Drawings

Figs. 1A, 1B, 1C, 1D, and 1E constitute a set of views of five prior art products implementing the tethered lens approach to capture a first-person video recording.

Fig. 2A and 2B constitute a set of diagrams of two prior art products implementing an integrated solution for first-person video recording.

Fig. 3A, 3B, 3C, 3D, 3E, and 3F are front perspective, rear perspective, side, front, back, and top views, respectively, of one embodiment of an integrated hands-free POV extreme motion digital camera.

Fig. 4A is a front perspective view of one embodiment of an integrated hands-free POV extreme motion digital camera showing an alternative positioning of the switch and a representative alternative rotation of the rotary leveling controller.

Fig. 4B is a rear perspective view of one embodiment of an integrated hands-free POV extreme motion digital camera showing a representative alternative number of rail cavities and optional detents within the rail cavities.

Fig. 5 is a cross-sectional side view of one embodiment of an integrated hands-free POV extreme motion digital camera.

Fig. 6 is an exploded view of the mechanical components of one embodiment of the integrated hands-free POV extreme motion digital camera.

Fig. 7 is an exploded view of the optical and mechanical components of an integrated hands-free POV extreme motion digital camera.

Fig. 8A and 8B are partial cross-sectional views of the lens system of the camera of fig. 7, showing a standard lens and a standard lens equipped with a lens filter, respectively.

FIG. 9 is a partially exploded view of the multi-function mounting system illustrating the ease of adjusting the camera mounting orientation and the ease of removing the camera while maintaining the mounting orientation.

Fig. 10 is a front perspective view of a standard mount employing a rail plug having two rails and two detents.

Fig. 11A, 11B, 11C and 11D are rear, front, side and top views, respectively, of a multi-function mounting system illustrating the matable relationship between the camera of Figs. 3A-3E and the standard mount shown in Fig. 10.

Fig. 12 is a perspective view of an alternative mount employing two mounting rails and two detents.

FIG. 13A is a front perspective view of a pole mounting system employing the mounting of FIG. 12.

Fig. 13B and 13C are cross-sectional side views of the pole mounting system showing an unlocked configuration and a locked configuration, respectively.

Fig. 13D and 13E are front perspective views showing the lever mounting system in an unlocked configuration and a locked configuration, respectively, about the handle lever.

FIG. 14A is a front perspective view of an alternative pole mounting system employing the mounting of FIG. 12 and a harness.

Fig. 14B and 14C are side and front views, respectively, of the alternative rod mount of fig. 14A.

FIG. 14D is a front perspective view of the alternative rod mount of FIG. 14A locked around a rod.

Fig. 15A is a front perspective view of a goggle mount employing strap entry with opposite facing sides of the mounting rail.

Fig. 15B is a side view of an alternative goggle mount employing a strap entry facing in the same direction as the mounting rail.

FIG. 15C is a partial front perspective view of the alternative eyewear mount of FIG. 15B mounted on the eyewear harness.

Figure 16 is a front perspective view of a ventilation helmet mount adapted to be attached to a ventilation helmet using a harness.

FIG. 17 is a front perspective view of another alternative eyewear mount adapted to be connected to an eyewear harness using a harness.

FIG. 18 is a front perspective view of an alternative pole mounting system employing the rail plug of FIG. 10.

Fig. 19 and 20 are perspective and top views, respectively, of a mounting system including a rotating circular rail plug disposed in a base mount configured with a locking function.

Fig. 21 and 22 are perspective and top views, respectively, of the base mount of fig. 19 and 20.

Fig. 23A, 23B, 23C, 23D and 23E are perspective, top, end, side and bottom views, respectively, of a slidable locking member mounted in a base mount such as that of Figs. 21 and 22.

Fig. 24 is an exploded view of the mounting system of fig. 19 and 20 with the attachment mechanism attached thereto.

Fig. 25A, 25B, 25C, and 25D are front perspective views of the digital video camera of Figs. 4A and 4B showing the lens disposed in a vertical position with the camera housing, respectively, rotated 90° counterclockwise, not rotated, rotated 90° clockwise, and rotated 180° to an inverted position relative to the vertical position. Fig. 25E is a front view of the digital video camera in the orientation of Fig. 25B, marked with dimension lines indicating the range of angular displacement of the horizontal image plane achievable by manually rotating the rotary leveling controller.

FIGS. 26A and 26B are front perspective and top views, respectively, of the digital video camera of FIGS. 4A and 4B with the slidable switch activator in a record ON slide set position; and FIGS. 27A and 27B are front perspective and top views, respectively, of the digital video camera of FIGS. 4A and 4B with the slidable switch activator in a record OFF slide set position.

Fig. 28 is a partially exploded view of the digital video camera of fig. 26A, 26B, 27A and 27B.

Fig. 29A and 29B show perspective and exploded views, respectively, of a GPS assembly including a GPS patch antenna and a GPS receiver module to provide GPS functionality in the digital video camera of fig. 26A, 26B, 27A and 27B.

Fig. 30 is a simplified block diagram illustrating the wireless technology preferably implemented in the digital video cameras of fig. 26A, 26B, 27A and 27B.

Fig. 31 is a flowchart showing the pairing of two devices over a wireless connection.

Fig. 32 is a flow chart showing one example of pairing an enabled microphone with the digital video camera of fig. 26A, 26B, 27A, and 27B.

Fig. 33 is a flow chart illustrating a preferred camera mounting position adjustment procedure performed by a helmet-wearing user to align the helmet-mounted digital video camera of Figs. 26A, 26B, 27A, and 27B.

Fig. 34 is a flowchart showing a preferred manual illuminance and color setting adjustment procedure performed by the user after completion of the camera mounting position adjustment procedure of fig. 33.

Fig. 35 is a flowchart showing a preferred automatic illuminance and color setting adjustment procedure performed by the user after the camera mounting position adjustment of fig. 33 is completed.

Fig. 36 illustrates two of the digital video cameras of Figs. 26A, 26B, 27A, and 27B aimed at a common color chart.

Fig. 37 is a flow chart illustrating the digital video camera of Figs. 26A, 26B, 27A, and 27B and a mobile controller device paired over a wireless connection and cooperating without security to complete a data pass from a second enabled digital video camera.

Fig. 38 is a hybrid flow diagram and illustration of a mobile controller device paired by wireless data and control command connections to two of the digital video cameras of Figs. 26A, 26B, 27A, and 27B to implement remote start/stop capability for multiple cameras.

Fig. 39 is a flow chart showing one example of pairing the two digital video cameras of fig. 26A, 26B, 27A and 27B by a wireless connection through a mobile controller device.

Fig. 40 is a block diagram showing a post-processing procedure for synchronizing audio data generated by a wireless microphone and a wired microphone included in the digital video camera of Figs. 26A, 26B, 27A, and 27B.

FIG. 41 is a simplified block diagram illustrating the processing of data from a single track of one data source.

FIG. 42 is a simplified block diagram illustrating the processing of data from multiple tracks of multiple data sources.

Detailed Description

Figs. 3A, 3B, 3C, 3D, 3E, and 3F are front perspective, rear perspective, side, front, back, and top views, respectively, of one embodiment of the integrated hands-free POV extreme motion digital video camera 10, and Figs. 4A and 4B are front and rear perspective views, respectively, of an alternative configuration and an alternative embodiment of the digital video camera 10. For the purposes of this description, the term "camera" is intended to encompass both camcorders and cameras. One example of such a digital video camera 10 is included in the Contour 1080P™ system sold by Contour of Seattle, Washington.

Figs. 5, 6, 7, 8A, and 8B show the optical and mechanical components of the digital video camera 10. Referring to Figs. 3A-3F, 4A, 4B, 5, 6, 7, 8A, and 8B, some embodiments of the digital video camera 10 include a manual leveling control system 12 with a manual horizon adjustment control for adjusting the orientation of a horizontal image plane 16 of an image recorded by an image sensor 18 relative to a housing plane 20 (in vertical cross-section) of a camera housing 22. An exemplary image sensor 18 may be a CMOS image capture device that provides a minimum illumination of 0.04 lux at f/1.2 and offers high sensitivity for low-light operation, low fixed-pattern noise, anti-blooming, zero smear, and low power consumption.

Referring to fig. 3A, 3C, 3F, 4A, 6 and 7, in some embodiments, the manual leveling control is a rotary control 14 that rotates about a control axis 24 such that manual rotation of the rotary control 14 changes the orientation of the horizontal image plane 16 relative to the housing plane 20. The manual leveling controller may be used to offset the horizontal image plane 16 with respect to the pitch, yaw, and roll of the mounting position of the camera housing 22.

In some preferred embodiments, the rotary controller 14 is positioned about the lens 26 and cooperates with the lens shroud 32 to support the lens 26 within the camera housing 22, such that manual rotation of the rotary controller 14 rotates the lens 26 relative to the camera housing 22. In other embodiments, the lens 26 may remain fixed relative to the camera housing 22 even if the rotary controller 14 rotates about the lens 26. In some embodiments, lens 26 is a 3.6 mm focal length, four-element glass lens with a 135° viewing angle and a focus range covering a large span, for example from arm's length (e.g., 500 mm) to infinity, which focuses visual information onto image sensor 18 at a resolution such as 1920 × 1080. Those skilled in the art will appreciate that many types and sizes of suitable lenses are commercially available.

In some preferred embodiments, the image sensor 18 is supported in rotational alignment with the orientation of the rotary controller 14 such that manual rotation of the rotary controller 14 rotates the image sensor 18 relative to the housing plane 20 of the camera housing 22. When the image sensor 18 is in a fixed relationship with the orientation of the rotary controller 14, the image data captured by the image sensor 18 does not require any post-capture leveling processing to obtain playback of the image data at the desired horizontal image plane 16. In particular, the rotary controller 14 may be set to a desired horizontal image plane 16, and the image sensor 18 will capture image data relative to the orientation of the horizontal image plane 16. In some embodiments, the image sensor 18 may remain stationary relative to the camera housing 22 even though the rotary controller 14 rotates about the image sensor 18.

Referring to Figs. 6, 7, 8A, and 8B, in some embodiments, an exemplary optical assembly 34 shows how the image sensor 18 and lens 26 are supported in a rotationally consistent manner by the cooperation of the lens shroud 32, the internal rotary controller 36, and the rotary controller 14. In some preferred embodiments, the rotary controller 14 may be spaced from the camera housing 22 by a gap 37 to facilitate rotation of the rotary controller 14 relative to the camera housing 22.

The lens cover holder 38 may be threadably secured to the rotary controller 14 and cooperate with an O-ring 40a to provide support for a lens cover 42 (e.g., a piece of glass). Lens holder 44 and lens assembly holder 46 may also be employed to support lens 26 in a desired position relative to other components in optical assembly 34. Lens assembly holder 46 may be secured to lens cover holder 38 by threads and O-ring 40b. An O-ring or bearing 43 may also be employed between the lens assembly holder 46 and the main housing 100 to facilitate rotation of the lens assembly holder 46 relative to the main housing 100 about the control axis 24. Set screws 45 may be employed to secure the lens assembly holder 46 of the optical assembly 34 to the main housing 100 without impeding rotation of the lens assembly holder 46 or the components therein. In some embodiments, the rotary controller 14, the lens cover holder 38, the O-ring 40a, the lens cover 42, the lens shroud 32, the laser source 48, the lens 26, the lens holder 44, the image sensor 18, the internal rotary controller 36, the O-ring 40b, and the lens assembly holder 46 of the optical assembly 34 may rotate together. Those skilled in the art will appreciate that several of these components may be fixed relative to the camera housing 22 or their synchronous rotation may be relaxed. For example, the lens cover 42, lens 26, and lens holder 44 need not rotate.

Referring to fig. 8B, the rotary controller 14 may support a lens filter or other lens component, or the rotary controller 14 may include threads or other members to enable attachment of additional or alternative lens components.

In some embodiments, the rotary controller 14 cooperates with an encoder to orient the image sensor 18 to a desired horizontal image plane 16. Alternatively, the encoder may direct a post-capture leveling process that adjusts the horizontal image plane 16 of the captured image, so that the image data plays back in the encoded horizontal image plane 16.
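Encoder-directed post-capture leveling can be sketched as rotating each decoded frame back by the encoded tilt angle. The nearest-neighbor mapping below is purely illustrative; a real pipeline would use an image-processing library, and the sign convention is an assumption.

```python
import math

def level_frame(frame, encoder_deg):
    """Rotate a frame by -encoder_deg about its center (nearest neighbor),
    undoing the recorded tilt so playback shows a level horizon."""
    h, w = len(frame), len(frame[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    a = math.radians(-encoder_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Inverse-map each output pixel back into the source frame.
            sx = cos_a * (x - cx) + sin_a * (y - cy) + cx
            sy = -sin_a * (x - cx) + cos_a * (y - cy) + cy
            ix, iy = round(sx), round(sy)
            if 0 <= ix < w and 0 <= iy < h:
                out[y][x] = frame[iy][ix]
    return out
```

Output pixels whose inverse mapping falls outside the source frame are left at a fill value of 0, which is why leveled footage is often cropped slightly.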

In some embodiments, the rotary controller 14 is positioned at an arbitrary location remote from the lens 26, in an arbitrary relationship to the location of the image sensor 18, or both. For example, the rotary controller 14 may be positioned on one side 28 of the camera housing 22, or on the back door 30, and the rotary controller 14 may remotely control the orientation of the image sensor 18, or may control the encoder. Those skilled in the art will appreciate that the arbitrarily placed manual leveling control need not be of the rotary type and may be of the electronic type rather than the mechanical type.

In some embodiments, the rotary controller 14 provides greater than or equal to 180° of rotation of the horizontal image plane 16 relative to the housing plane 20 of the camera housing 22 in each of the clockwise and counterclockwise directions. In one example, the rotary controller 14 provides 180° plus an additional rotation of greater than or equal to 6° in each direction, thereby providing a full 360° rotation of the horizontal image plane 16 relative to the housing plane 20. Such adjustability includes embodiments in which the orientation of the rotary controller 14 is consistent with the orientation of the image sensor 18, as well as embodiments in which an encoder is employed. Preferably, the lens 26 and image sensor 18 rotate together through 360° within a sealed pivoting enclosure. This means that the image sensor 18 can be rotated to capture a level world regardless of how the operator mounts the digital video camera 10.

Referring to Figs. 4A and 4B, in some embodiments, a rotation indicator 54 is disposed on an outer surface 56 of the rotary controller 14. The rotation indicator 54 may take the form of a horizontal notch or raised bar, which may be a different color than the color of the camera housing 22. Camera housing 22 may have a notch or raised bar 58 disposed in a fixed position, similar to or smaller than rotation indicator 54. The rotation indicator 54 and the notch or raised bar 58 may be the same color or different colors. The angular extent of the displacement between the rotation indicator 54 and the notch 58 provides a physical indication of the amount by which the rotary controller 14 is offset from its "home" position relative to the camera housing 22.

In some preferred embodiments, the rotation indicator 54 and the notch 58 are aligned collinearly (in the "home" position) when the horizontal image plane 16 is perpendicular to the housing plane 20. Thus, if the digital video camera 10 is positioned on a flat horizontal surface and the two notches are collinear, the horizontal image plane 16 will be horizontal.
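The angular displacement between the indicator and the home notch can be expressed as a signed offset wrapped to at most a half turn either way. A small sketch; the degree convention is an assumption:

```python
def indicator_offset_deg(indicator_deg, home_deg=0.0):
    """Signed displacement of the rotation indicator from the 'home'
    notch, wrapped into the interval (-180, 180] degrees."""
    off = (indicator_deg - home_deg) % 360.0
    return off - 360.0 if off > 180.0 else off
```

Wrapping keeps the reported offset intuitive: an indicator at 350° reads as 10° short of home rather than 350° past it.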

Referring to Figs. 3A, 3C, 3D, 3F, 4A, 7, and 8, in a preferred embodiment, one or more laser sources 48 are mounted within the rotary controller 14, oriented in the horizontal image plane 16 and capable of projecting light emissions to define a horizontal projection axis or plane 52 that is parallel to or coplanar with the horizontal image plane 16. Thus, manual rotation of the rotary controller 14 changes the orientation of the horizontal projection axis 52, together with the horizontal image plane 16, relative to the housing plane 20. The beam forming the horizontal projection axis 52 may be used as a guide for the operator, facilitating adjustment of the horizontal image plane 16 by simple rotation of the rotary controller 14 after the camera housing 22 is installed.

In some embodiments, a single laser source 48 may employ beam shaping optics and/or beam shaping apertures, filters, or films to provide a desired beam shape, such as a line, a line of decreasing or increasing size, or a smiley face figure (smiley face). In some embodiments, only a single beam shape is provided. In some embodiments, multiple beam shapes are provided and may be exchanged by, for example, manual or motorized rotation of the laser filter. Those skilled in the art will appreciate that two or more laser sources 48 may be equipped with beam shaping functionality that cooperate to provide a horizontal projection axis 52, or to provide an image of the horizontal projection axis 52 or other directing means.

In some embodiments, two laser sources 48 (or two sets of laser sources) are employed to project two beams of light that define the horizontal projection axis 52. Two laser sources 48 may be mounted on opposite sides of the lens 26 such that their positions determine the laser mounting axis that bisects the lens 26. In some embodiments, the lens shroud 32 provides support for the laser source 48 such that the laser source is positioned to emit light through an aperture 60 (fig. 7) in the lens shroud 32. In some embodiments, alternative or additional optical support barrels 32a may support laser source 48 and other optical components.
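Because the two projected dots define the horizontal projection axis, the roll error the operator must dial out is simply the angle of the line through the dots. A sketch; the dot coordinates, taken in an arbitrary level reference frame, are an assumption:

```python
import math

def roll_from_laser_dots(p_left, p_right):
    """Roll angle, in degrees, of the line through the two projected laser
    dots; 0 means the camera's horizontal image plane is level."""
    dx = p_right[0] - p_left[0]
    dy = p_right[1] - p_left[1]
    return math.degrees(math.atan2(dy, dx))
```

Rotating the rotary controller 14 until this angle reads zero levels the horizontal image plane 16 without consulting recorded footage.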

Laser source 48 may be a diode laser similar to those used in laser pointers. The laser sources 48 preferably project light of the same wavelength. In some embodiments, the operator may select between several different wavelengths, such as red or green, depending on the contrast with the background color. In some embodiments, the two wavelengths may be projected simultaneously or alternately. For example, four laser sources 48 may be employed, with red and green laser sources 48 disposed on each side of the lens 26, such that if one of these colors does not show up against the background, red and green horizontal projection axes 52 are projected simultaneously or alternately.

In some embodiments, the laser source 48 may be responsive to a power switch or button 64, and in some examples, the power switch or button 64 may be located on the rear door 30 of the camera housing 22. Rotation of the leveling control system 12 or the rotary controller 14 may provide an on condition to the laser source 48 in response to a timer, which may be preset, for example, for five seconds, or may be a user-selectable time period. Alternatively, a single depression of the button 64 may provide an on condition to the laser source 48, while a second depression of the button 64 provides an off condition. Alternatively, a single press of button 64 may provide the on condition in response to a timer, which may be preset, for example, for five seconds, or may be a user-selectable time period. Alternatively, button 64 may require constant depression to maintain laser source 48 in an on condition. The button 64 may also control other functions, such as a standby mode. Those skilled in the art will appreciate that many variations are possible and are well known in the art.
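The button behaviors described above (press-to-toggle and timed-on) amount to a small state machine. The sketch below only illustrates that logic; the class name, mode names, and five-second default are assumptions for illustration, not details of the camera's firmware.

```python
import time

TIMER_SECONDS = 5.0  # preset timer; could instead be a user-selectable period


class LaserButton:
    """Toggle mode: first press turns the laser on, second press turns it off.
    Timer mode: a press turns the laser on; it switches off after a timeout."""

    def __init__(self, mode="toggle"):
        self.mode = mode          # "toggle" or "timer"
        self.laser_on = False
        self.on_since = None

    def press(self, now=None):
        now = now if now is not None else time.monotonic()
        if self.mode == "toggle":
            self.laser_on = not self.laser_on
        elif self.mode == "timer":
            self.laser_on = True
            self.on_since = now

    def poll(self, now=None):
        """Called periodically; enforces the timeout in timer mode."""
        now = now if now is not None else time.monotonic()
        if self.mode == "timer" and self.laser_on:
            if now - self.on_since >= TIMER_SECONDS:
                self.laser_on = False
        return self.laser_on
```

A press-and-hold variant would simply set `laser_on` while the button input reads pressed and clear it otherwise.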

Those skilled in the art will also appreciate that any type of video screen, such as those commonly found in conventional camcorders, may be attached to or be part of the camera housing 22. Such a video screen and any associated touch display may also be used to provide orientation feedback, either in conjunction with the laser source 48 or separately from it. Those skilled in the art will appreciate that the video screen may take the form of a microdisplay mounted internally to the camera housing 22 with a viewing window through the camera housing 22 to the screen, or may take the form of an external LCD screen.

referring to fig. 3A, 3B, 3C, 3F, 4A, 4B, 5 and 6, in a preferred embodiment, the digital video camera 10 has a manually operable switch activator 80 that controls one or both of the recording conditions of the image sensor 18 and the transfer of captured image data to a data storage medium (on a 2GB MicroSD card). In some embodiments, the digital video camera 10 is designed to use pulsed power to conserve battery life while monitoring the switch activator 80. When the switch activator 80 is positioned to the open position, the pulse power system is instructed to provide full power to the electronic device and immediately begin recording; similarly, when the switch activator 80 is positioned to the closed position, the pulse power system is instructed to shut off power to the electronic device and immediately stop recording.

In some preferred embodiments, when switch activator 80 is slid or toggled, it moves a magnetic reed that is recognized by the pulse power sensor. Once the sensor recognizes that the magnetic reed has been switched to the on position, the pulse power system is triggered to power up most or all of the electronics of digital video camera 10, including the electronics needed for recording, along with selected other electronics or simply all of the electronics. Once full power is provided to the system electronics, a feed from the image sensor 18 begins encoding and writing to the data storage medium. As soon as the first frame is written to the data storage medium, a signal is sent to the LED 82 to indicate via the light pipe 84 that the digital video camera 10 is recording. Thus, activation of the switch activator 80 initiates recording almost instantaneously.
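The pulsed-power sequence described above can be illustrated with a simple step-per-polling-cycle model. This is a hypothetical sketch of the logic only, not the camera's actual firmware; the class and event names are invented for illustration.

```python
class PulsedPowerController:
    """Models the pulsed-power pattern: the reed switch is sampled at a low
    duty cycle, and a transition to the on position powers the recording
    electronics and starts encoding; a transition to off reverses it."""

    def __init__(self):
        self.recording = False
        self.events = []  # ordered log of power/recording actions

    def step(self, reed_switch_on):
        """One polling cycle: react to the sampled reed-switch state."""
        if reed_switch_on and not self.recording:
            # Full power first, then encode; the record LED lights once
            # the first frame is written to the storage medium.
            self.events += ["power_up", "start_recording", "led_on"]
            self.recording = True
        elif not reed_switch_on and self.recording:
            self.events += ["stop_recording", "power_down", "led_off"]
            self.recording = False
        return self.recording
```

Because only transitions trigger actions, holding the switch in one position across many polling cycles draws no extra power and issues no duplicate commands.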

In some embodiments, the switch activator 80 powers on the electronics and initiates recording from a standby mode, for example, after the button 64 is pushed to activate a pulsed power mode. In other embodiments, the switch activator 80 powers on the electronics and directly initiates recording without any prior activation. In some embodiments, a video encoder is coupled to the image sensor 18, and the microprocessor provides instructions to the video encoder. In some embodiments, the switch activator 80 is adapted to substantially simultaneously control the provision of power to the microprocessor, the image sensor 18, and the video encoder such that when the switch activator 80 is placed in the on position, the microprocessor, the image sensor 18, and the video encoder all receive power substantially simultaneously, thereby initiating video data capture operations substantially simultaneously.

In some embodiments, an audio encoder is coupled to the microphone 90, and the microprocessor provides instructions to the audio encoder. In some embodiments, the switch activator 80 is adapted to substantially simultaneously control the provision of power to the microphone 90 and the audio encoder such that when the switch activator 80 is placed in the on position, the microprocessor, the microphone 90, and the audio encoder all receive power substantially simultaneously, thereby initiating audio data capture operations substantially simultaneously.

In some embodiments, when switch activator 80 is placed in the off position, the microprocessor, the image sensor 18, and the video encoder all stop receiving power substantially simultaneously, thereby stopping video data capture operations substantially simultaneously. In some embodiments, when switch activator 80 is placed in the off position, the microprocessor, the microphone 90, and the audio encoder all stop receiving power substantially simultaneously, thereby stopping audio data capture operations substantially simultaneously.

In some embodiments, the microprocessor, image sensor 18, video encoder, microphone 90, and audio encoder all receive power substantially simultaneously, thereby initiating video data and audio data capture operations substantially simultaneously. In some embodiments, the microprocessor, image sensor 18, video encoder, microphone 90, and audio encoder all stop receiving power substantially simultaneously, thereby stopping video data and audio data capture operations substantially simultaneously.

In some embodiments, the switch activator 80 controls the provision of power to additional electronics such that the additional electronics are deactivated when the switch activator 80 is in the off position and activated when the switch activator 80 is in the on position.

Those skilled in the art will appreciate that the switch activator 80 can be designed to have more than two sliding settings. For example, in addition to the on and off settings for recording, the switch activator 80 may also provide intermediate settings to activate the laser source 48, activate one or more status indicators, or enable other functions in the digital video camera 10.
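A switch activator with more than two sliding settings can be modeled as a lookup from slider position to the set of enabled functions. The position names and function assignments below are illustrative assumptions, not a specification of the actual switch.

```python
# Hypothetical three-position switch activator: beyond the on/off recording
# settings, an intermediate setting activates the laser source and status
# indicators without recording.
SWITCH_ACTIONS = {
    "off":          {"recording": False, "laser": False, "indicators": False},
    "intermediate": {"recording": False, "laser": True,  "indicators": True},
    "on":           {"recording": True,  "laser": False, "indicators": False},
}


def apply_switch(position):
    """Return the functions enabled for a given slider position."""
    return SWITCH_ACTIONS[position]
```

Adding further settings (e.g., a standby position) is then just another table entry rather than new control logic.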

Embodiments using a magnetic reed switch as switch activator 80 prevent water or other fluids from entering camera housing 22. Those skilled in the art will appreciate that other waterproof on/off switch designs are possible. In a preferred embodiment, the digital video camera 10 also employs a waterproof microphone 90, such as an omnidirectional microphone having a sensitivity of −44 ± 2 dB and a frequency range of 100 Hz to 10,000 Hz, for capturing audio data and providing it to the data storage medium or a second data storage medium. Alternatively, the camera housing 22 may include a breathable waterproof material (e.g., Gore-Tex™) to prevent the ingress of water without the need for a waterproof microphone 90. Those skilled in the art will appreciate that microphones with various operating parameters suitable for microphone 90 are commercially available or may be manufactured to meet desired criteria.

In some embodiments, the microphone 90 is positioned below the switch activator 80 such that the switch activator 80 covers the microphone 90 whenever the switch activator 80 is in the off position and exposes the microphone 90 whenever the switch activator 80 is in the on position. The audio data capture operation is preferably deactivated when the switch activator 80 is in the off position and activated when the switch activator 80 is in the on position. The on and off conditions of the audio data capture operation may be controlled by switch activator 80 in conjunction with the on and off conditions of the video capture operation.

Referring to fig. 5 and 6, in some embodiments, the camera housing 22 includes a main housing 100 supporting the switch activator 80, front and bottom trim pieces 106, and a rear door 30 connected to the main housing 100 by a hinge 102. In some embodiments, the rear door 30 may be removable at its hinge 102 to allow accessories to be connected to the main housing 100 for extended functionality. The rear door 30 may provide an area of thinner material to allow depression of the button 64. A gasket 114 may seal between the main housing 100 and the rear door 30 to provide water resistance. The housing cover 108 may be connected to the main housing 100 by a rubber gasket 110, which also enhances the waterproof properties of the camera housing 22.

The side cover 112 may be ultrasonically welded to the outer surface of the housing cover 108 and the lower portion of the main housing 100, which forms the lower portion of the side 28 of the camera housing 22. In some embodiments, the camera housing 22 is made of brushed aluminum, baked glass fiber, and rubber. In particular, the main housing 100, the housing cover 108, and the side cover 112 may be made of aluminum. Front and bottom trim pieces 106 may also be ultrasonically welded to main housing 100.

Referring to fig. 3A, 3B, 4A, 4B, 6 and 9, in a preferred embodiment, the digital video camera 10 includes a portion of a mounting system 120 having two or more housing rail cavities 122 and two or more staggered housing rails 124 on each side 28 of the camera housing 22 for engaging a universal mount 126. An example of such a mounting system 120 is the Trail™ mounting system sold by Contour, Inc. of Seattle, Washington.

The housing rail cavities 122 and housing rails 124 may be formed by cutouts in the side covers 112 that are mounted to the main housing 100. In some embodiments, digital video camera 10 is bilaterally symmetric and has an equal number of housing rail cavities 122 on each side cover 112 and an equal number of housing rails 124 on each side cover 112. In some embodiments, the digital video camera 10 may provide, for example, two housing rail cavities 122 (such as shown in fig. 3A and 3B) or three housing rail cavities 122 (such as shown in fig. 4A and 4B) in each side cover 112. However, those skilled in the art will appreciate that in some embodiments, the digital video camera 10 need not be symmetrical and may have an unequal number of rail cavities 122 on its side covers 112.

In some embodiments, the rail cavity 122 has a cross-sectional appearance resembling a "T", a wedge, or a trapezoid. Those skilled in the art will appreciate that the dimensions of the stem or transverse branches of the "T" may be different. For example, the stem may be thicker than the branches, or one or more branches may be thicker than the stem; similarly, a stem may be longer than a branch, and one or more branches may be longer than a stem. The cross-sectional shape may have flat sides or corners, which may be rounded. Those skilled in the art will also appreciate that a number of other cross-sectional shapes for the rail cavities 122 are possible, and that the cross-sectional shapes of different housing rail cavities 122 need not be the same, whether in the same side cover 112 or in different side covers 112. Similarly, the housing rail cavities 122 can have different lengths, and the housing rails 124 can have different lengths. The bottom of the trim piece 106 may alternatively or additionally be provided with a housing rail 124.

In some embodiments, one or more of the housing rail cavities 122 may contain one or more bumps or detents 128. In some embodiments, each side 28 of the camera housing 22 includes at least one bump or detent 128. In some embodiments, each housing rail cavity 122 contains at least one bump or detent 128. However, in some embodiments, only a single housing rail cavity 122 on each side 28 includes a bump or detent 128. Those skilled in the art will appreciate that the different sides 28 need not contain the same number of bumps or detents 128.

Fig. 9 shows a base mount 130 and a rail plug 132 that fit together to form the flat surface mount 134 shown in fig. 10. Fig. 11A-11D (fig. 11) depict different views of the camera housing 22 mated with the flat surface mount 134. Referring to fig. 9-11, the rail plug 132 includes one or more mounting rails 136 adapted to mate with the housing rail cavities 122 on the camera housing 22. Similarly, the rail plug 132 includes one or more mounting rail cavities 138 adapted to mate with the housing rails 124 on the camera housing 22. The mounting rails 136 may have the same or a different cross-sectional shape than the housing rails 124, and the mounting rail cavities 138 may have the same or a different cross-sectional shape than the housing rail cavities 122. In some preferred embodiments, rails 124 and 136 and cavities 122 and 138 have the same cross-sectional profile.

In some embodiments, one or more of the mounting rails 136 on the rail plug 132 may include one or more detents or tabs 140. In some embodiments, each mounting rail 136 includes at least one detent or tab 140. However, in some examples, only a single mounting rail 136 contains a detent or tab 140. The detent or tab 140 is adapted to mate with the bump or detent 128, such that the rail plug 132 has a detent 140 if the camera housing 22 has a bump 128, or the rail plug 132 has a tab 140 if the camera housing 22 has a detent 128. Those skilled in the art will appreciate that in some alternative embodiments, the housing rail 124 has a bump or detent 128 and the mounting rail cavity 138 has a detent or tab 140.

The universal mounting system 120 facilitates mounting and orienting the digital video camera 10 and facilitates dismounting the digital video camera 10 while retaining the mounting orientation. In some embodiments, the base mount 130 may have a very small footprint and may be affixed to a surface with adhesive pads designed for outdoor use. After the base mount 130 has been attached to a surface, the rail plug 132 may be detached from the base mount 130.

In some embodiments, the rail plug 132 has a circumferential serrated edge 142 that engages a serrated receiving inside edge 144 of a base mount cavity 146 adapted to receive the rail plug 132. In some embodiments, rail plug 132 is compression fit within base mount 130. In some embodiments, a hook-and-loop fastener (e.g., Velcro™) may be used to further secure rail plug 132 within base mount 130 instead of or in addition to the compression-fit technique.

The mounting rails 136 of the rail plug 132 may slide into the housing rail cavities 122 of the camera housing 22 as the mounting rail cavities 138 of the rail plug 132 slide onto the housing rails 124 of the camera housing 22, as indicated by directional arrow 148 (fig. 9), to secure the rail plug 132 to the camera housing 22. The mating detents and tabs 128 and 140 may be engaged to prevent unintended lateral movement of the rail plug 132 relative to the camera housing 22. The rail plug 132 with the attached digital video camera 10 may be rotated from zero to 360 degrees about an axis perpendicular to the base mount 130 to capture a desired viewing angle. The rail plug 132 may then be inserted or reinserted into the base mount 130, as indicated by directional arrow 150 (fig. 9). Fig. 11 shows, from several different angles, how the digital video camera 10, rail plug 132, and base mount 130 appear when mated together.

In some embodiments, rail plug 132 and base mount 130 may be made of a hard but flexible material, such as rubber or a polymer with similar properties. Those skilled in the art will appreciate that rail plugs 132 and base mounts 130 may be made of hard or soft plastic. Because the base mount 130 may be flexible, it may be attached to a variety of surfaces, such as the surfaces of helmets, ski decks, snowboards, fuel tanks, windows, doors, and canopies. The type and flexibility of the material of the flat surface mount 134 may provide a "rubber" cushioning effect as well as enhance rail sliding, rail engagement, and plug engagement. The mounting system 120 may also include a tether (not shown).

When the recording of the activity is complete, the rail plug 132 with the attached digital video camera 10 may be disengaged from the base mount 130 for safe storage or data upload. The base mount 130 may remain attached to the surface and need not be reattached and/or readjusted. Alternatively, the camera housing 22 may be disengaged from the rail plug 132, leaving the rail plug 132 engaged with the base mount 130 so that the original orientation of the mounting rails 136 of the rail plug 132 is maintained, allowing quick reattachment of the digital video camera 10 without reorienting it relative to the base mount 130 or to the person, device, or vehicle on which the base mount 130 is mounted.

Fig. 12 shows an alternative rail plug 132a; fig. 13A, 13B, 13C, 13D, and 13E (fig. 13) show several views of the rail plug 132a with an alternative base mount 130a, including a locked configuration and an unlocked configuration, to form a bar mount 126a for mounting on a bar 160 (e.g., a handlebar). Referring to fig. 12 and 13, the rail plug 132a may be used as a stand-alone mount with an adhesive backing, or it may be used in conjunction with, or integrated into, one or more kinds of base mounts 130a. Rail plug 132a may be attached to base mount 130a by using an adhesive, a hook-and-loop fastener, screws, other conventionally known means, or a combination thereof. Mounting rails 136 may be formed with holes 162 to provide screw and screwdriver access for mounting rail plug 132a on base mount 130a.

The base mount 130a is configured to open and close around a bar 160, particularly a bar of standard recreational equipment, especially a bar having a small diameter of about 1 to 1.5 inches (2.5 to 3.8 cm). In some embodiments, the base mount 130a has a locking pin 164 with a head 166 that may be secured within a locking chamber 168. Locking pin 164 applies compression to the bar 160 to prevent rotation about the bar 160 after the desired position of the base mount 130a is determined. The base mount 130a may also be provided with a pin door cover 170 to prevent debris from entering the locking pin 164 or locking chamber 168.

Fig. 14A, 14B, 14C and 14D (fig. 14) show several views of a rail plug 132b with an alternative base mount 130b (including a strap 172) to form a pole mount 126b for mounting on a pole 160b, such as a roll cage, windsurfing mast or hang glider support. Referring to fig. 14, in some embodiments, the strap 172 is large enough to accommodate poles 160b having diameters up to 4 inches (10 cm) or more. In some embodiments, a dial 174 may be used to tighten and loosen the strap 172. In other embodiments, the dial 174 controls the rotation of the rail plug 132b relative to the base mount 130b so that the side-to-side angle of the digital video camera 10 can be adjusted. Like rail plug 132a, rail plug 132b may be attached to or may be integrated with base mount 130b.

Fig. 15A, 15B and 15C (fig. 15) show several views of a rail plug 132c, the rail plug 132c being attached to or integrated with alternative base mounts 130c and 130e of respective belt or strap mounts 126c and 126e for mounting on a belt, strap or band 180, such as the strap 180 of a pair of goggles 182. Referring to fig. 15A, the base mount 130e has a damper 184a and a strap entry 186a on the inside of the base mount 130e, i.e., facing in the direction opposite to the direction the mounting rails 136 face. The damper 184a may be made of rubber or other suitable cushioning material to cushion the head of the user from the digital video camera 10.

Referring to fig. 15B, the damper 184b is disposed on the inner side of the base mount 130c, i.e., facing in the direction opposite to the direction in which the mounting rails 136 face. However, the strap entrances 186b are disposed on the outside of the base mount 130c, i.e., facing the same direction as the mounting rails 136. Fig. 15C shows the base mount 130c of fig. 15B mounted on a strap 180 of a pair of goggles 182. Those skilled in the art will appreciate that rail plug 132a may be substituted for rail plug 132c.

Fig. 16 shows a rail plug 132d with an alternative base mount 130d forming a helmet mount 126d for mounting on a ventilated helmet. The helmet mount 126d includes one or more slots 190 through which straps can be used to secure the base mount 130d to the helmet through ventilation slots in the helmet. Those skilled in the art will appreciate that rail plug 132a may be substituted for rail plug 132d.

Fig. 17 is a front perspective view of another alternative eyewear base mount 130f adapted for attachment to a goggle strap 180 (fig. 15C) using a strap. The strap 192 may be looped through buckles 194 and 196 to secure the base mount 130f to the goggle strap 180. The base mount 130f is adapted to receive a circular rail plug 132 (fig. 10) that allows 360-degree rotation of the mounting rails 136. Such embodiments allow a user to adjust the angle of the digital video camera 10 to be different from the user's vertical viewing angle. For example, the user may look down at the ground while the digital video camera 10 (and its image sensor 18) captures images directly ahead. In some embodiments, the base mount 130f may include pads 198 and 202 to dampen vibrations, and may include a retaining tab 200 to prevent the rail plug 132 from being inadvertently loosened by vibrations. The strap 192 may also or alternatively include pads 204 and 208.

Those skilled in the art will appreciate that the base mounts 130a-130d may alternatively be configured to receive a circular rail plug 132 (fig. 10) that allows 360-degree rotation of the mounting rails 136. For example, fig. 18 shows an alternative bar mount 126g having a base mount 130g adapted to receive a circular rail plug 132 that allows 360-degree rotation of the mounting rails 136. Such embodiments may help compensate for handlebars or other bars 160 or poles 160b that may be tilted forward or backward.

In some embodiments, base mount 130g has a different locking mechanism than base mount 130a (fig. 13). For example, in some embodiments, locking pin 210 is attached to base mount 130g by a hinge 212, and locking pin 210 is attached at its other end to a pin door cover 214 by a hinge 216. Locking pin 210, in conjunction with pin door cover 214, applies compression to the bar 160 to prevent rotation of base mount 130g about the bar 160 after its desired position is established. Those skilled in the art will appreciate that the base mount 130a may alternatively employ this locking mechanism. In some embodiments, base mounts 130a and 130g include bar grips 218 to help maintain a preferred orientation of base mounts 130a and 130g relative to the bar 160. In some embodiments, the base mounts 130 and 130a-130g may include tether loops 220 adapted to receive tether lines that may be attached to the associated rail plugs 132 and 132a-132d, the digital video camera 10, or an operator.

Fig. 19 and 20 are perspective and top views, respectively, of a mounting system 300, the mounting system 300 including a rotatable circular rail plug 132 disposed in a base mount 130h configured with a locking feature that allows the digital video camera 10 to be adjusted while attached to a mounting surface. Fig. 21 and 22 are perspective and top views, respectively, of the base mount 130h. The base mount 130h has a generally rectangular shape and includes a large-diameter circular opening 304 in its top wall 302 and a smaller-diameter circular opening 308 in its bottom wall 306. The base mount 130h has opposed side walls 310 and 312, which define aligned, generally rectangular slots 314 of the same size, and opposed side walls 316 and 318, on the inner surfaces of which spaced-apart, aligned serration-receiving edges 144 are formed. The inner surfaces of the side walls 310, 312, 316 and 318 include arcuate segments that are sized to allow bidirectional ratcheting-type rotational movement of the circular rail plug 132 when the circular rail plug 132 is disposed through the circular opening 304 in the base mount 130h with the serration-receiving edges 144 in matable relation with the circumferential serrated edge 142.

Fig. 23A, 23B, 23C, 23D and 23E are perspective, top, end, side and bottom views, respectively, of a generally rectangular slidable locking member 330. The slidable locking member 330 is sized to fit within each slot 314 and, when inserted into the two slots 314 in the base mount 130h, to slidably extend through and out of either of the side walls 310 and 312. The locking member 330 is a unitary structure that includes a generally planar central portion 332 disposed between a locking end piece 334 and a non-locking end piece 336. The central portion 332 constitutes a recessed area that is bounded by the raised end pieces 334 and 336 and into which the circular rail plug 132 is inserted when the mounting system 300 is assembled. Central portion 332 includes an oblong aperture 338 having opposing circular segments 340 separated by straight segments 342. U-shaped slots 344 cut in the central portion 332 on either side of the oblong aperture 338 provide downwardly depending locking tabs 346. The locking tabs 346 are sized and configured to slide through and fit into corresponding grooves 350 in a bottom plate 352 of the base mount 130h. The locking end piece 334 has a serrated arcuate inner surface 354, and the non-locking end piece 336 has a smooth arcuate inner surface 356. The curvature of the arcuate inner surfaces 354 and 356 is complementary to the curvature of the circular rail plug 132.

Fig. 24 is an exploded view of the mounting system 300 with an exemplary attachment mechanism attached. When the mounting system 300 is assembled, the locking member 330 is mounted in the base mount 130h with the end pieces 334 and 336 adapted for sliding movement in the slots 314. A plug 360, comprised of a top disk 362 and two downwardly depending legs 364, secures the locking member 330 within the slots 314 in the base mount 130h and limits its range of travel within the slots 314. The top disk 362 fits into the recess that receives the rail plug 132, and flanges 366 extending from the free ends of the legs 364 secure the plug 360 in the base mount 130h when the free ends of the legs 364 are pushed through the circular opening 308.

The mounting system 300 operates in the following manner. The user adjusts the angular position of the digital video camera 10, which is operatively connected to the mounting rails 136, by rotating the rail plug 132 within the base mount 130h. To allow such rotation, the user pushes the non-locking end piece 336 to slide the locking member 330 so that the serrated inner surface 354 moves away from and does not engage the serrated edge 142 of the rail plug 132. The legs 364 of the plug 360 contact the boundaries of the oblong aperture 338, thereby stopping the sliding movement of the locking member 330 with the locking end piece 334 projecting outwardly from its associated slot 314. Locking tabs 346 fit in their respective grooves 350 to releasably retain the locking member 330 in its unlocked position. Rotation of the rail plug 132 provides audible, tactile feedback to the user because of the mating relationship between the serration-receiving edges 144 and the serrated edge 142.

After the angular position adjustment of the digital video camera 10 is completed, the user locks the rail plug 132 in place by pushing the locking end piece 334 to slide the locking member 330 so that the serrated inner surface 354 engages the serrated edge 142 of the rail plug 132. The sliding movement of the locking member 330 stops with the non-locking end piece 336 protruding outwardly from its associated slot 314. The locking tabs 346 fit into their respective grooves 350 to releasably retain the locking member 330 in its locked position.

The base mount 130h may be mounted directly to the mounting surface by using an adhesive. The base mount 130h may also be coupled to a variety of mounting surfaces by adding custom coupling plates (e.g., strap coupling plate 370), attached by screws 372 or by another technique such as adhesive bonding or welding. These coupling plates may change the shape of the base mount 130h to better conform to a contoured surface, or may include various attachment mechanisms, such as straps 374 or hooks.

Referring again to fig. 3B, 3E, 4B, and 5, the button 64 (or an additional button 388) may control one or more status indicators, such as the LED 82, which indicates through the light pipe 84 that the digital video camera 10 is recording. The button 64 (or an additional button 388) may also, for example, control the operation of an LED 390, which indicates the power status of a battery (not shown) via a light pipe 392. In some embodiments, a single button controls two or more status indicators, or all of the status indicators, and may also control the laser source 48 as well as the recording standby mode.

In some embodiments, the status indicators may provide different colors depending on the status of the item under consideration. In some embodiments, green, yellow, and red LEDs are used to indicate whether the battery is fully charged, partly charged, or nearly depleted. Similarly, in some embodiments, green, yellow, and red LEDs are used to indicate whether the SD memory card is nearly empty, half empty, or nearly full. In other embodiments, a green light indicates greater than or equal to 80% space or charge, a yellow light indicates greater than or equal to 30% space or charge, and a red light indicates less than 30% space or charge. Those skilled in the art will appreciate that the number and meaning of the colors may vary. The camera housing 22 may provide symbols indicating which items the light pipes 84 and 392 represent, such as a battery symbol 394 and a memory card symbol 396 on the rear door 30.
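The example thresholds in this paragraph (at least 80% for green, at least 30% for yellow, otherwise red) reduce to a small mapping function, sketched here for illustration; the function name is an assumption.

```python
def status_color(percent_remaining):
    """Map remaining battery charge or card space (0-100) to an LED color,
    using the example thresholds: >= 80% green, >= 30% yellow, else red."""
    if percent_remaining >= 80:
        return "green"
    if percent_remaining >= 30:
        return "yellow"
    return "red"
```

The same function serves both the battery indicator and the memory-card indicator, since each is driven by a single percentage.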

To facilitate simpler and more manageable processing of the video once it is recorded, the digital video camera 10 may be designed to automatically segment the video into computer- and web-ready file sizes. The segmentation may be performed automatically by hardware during the recording process without user intervention. In some embodiments, software automatically closes the video file and opens a new file at a predetermined boundary. In some embodiments, the boundary is time-based, e.g., 10 minutes per segment, or size-based, e.g., 10 MB per segment. The segmentation process may be designed so that file boundaries are based on predetermined limits, or the user may adjust the segment length to the user's own preferred duration. In some embodiments, a video encoder (either hardware or software based) optimizes file boundaries by delaying them relative to the nominal boundary positions until a period of relatively static video and audio, i.e., when motion changes are minimal. Those skilled in the art will appreciate that in various embodiments such segmentation may be implemented by software or hardware.
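The boundary-optimization idea above can be sketched as follows: a segment is cut at the first frame after the nominal boundary whose motion score indicates a relatively static moment, with a cap on how long the cut may be deferred. The function name, the motion-score representation, and the threshold values are assumptions for illustration, not the encoder's actual parameters.

```python
def choose_split_points(frames, nominal_len=600, max_defer=30,
                        static_threshold=0.1):
    """frames: list of (timestamp_seconds, motion_score) tuples in order.
    Returns the frame indices at which a new segment file would be opened:
    at each nominal boundary (every nominal_len seconds), the cut is deferred
    until motion drops below static_threshold or max_defer seconds pass."""
    splits = []
    next_boundary = nominal_len
    for i, (t, motion) in enumerate(frames):
        if t >= next_boundary:
            deferred = t - next_boundary
            if motion <= static_threshold or deferred >= max_defer:
                splits.append(i)
                next_boundary = t + nominal_len
    return splits
```

A size-based variant would compare accumulated encoded bytes against a byte budget instead of timestamps against `nominal_len`.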

The digital video camera 10 is an all-in-one shooting and storage digital video camcorder designed to operate in extreme weather conditions and in a hands-free manner. The digital video camera 10 is wearable and designed for harsh environments (water, heat, cold, extreme vibration), and the Contour 1080P™ system includes an application mount 126 to attach to any person, device, or vehicle. The internal components of the digital video camera 10 may be silicone-treated, silicone-coated, or otherwise silicone-insulated from the elements to keep the digital video camera 10 operational despite mud, dirt, snow, and rain.

The preferred embodiment of the digital video camera 10 is equipped with wireless connection protocols and global navigation and position determination, preferably Global Positioning System (GPS), technology to provide remote image acquisition control and viewing. A packet-based open wireless technology standard protocol is used to provide control signals or streaming data to the digital video camera 10 and to access image content stored on or streamed from the digital video camera 10. GPS technology enables the location of the digital video camera 10 to be tracked while the digital video camera 10 records image information. The implementation of the protocols and GPS technology in the digital video camera 10 is described in detail below.

The preferred embodiment of the digital video camera 10 allows inverted mounting of the camera housing 22 while maintaining the correct orientation of the video image through a mechanical or electrical 180° rotation of the lens 26. Figs. 25A, 25B, 25C, 25D, and 25E show the mechanical rotation. Figs. 25A, 25B, 25C, and 25D are front perspective views of the digital video camera 10 showing the lens 26 disposed in a vertical position with the camera housing 22 of the digital video camera 10, respectively, rotated 90° counterclockwise, unrotated, rotated 90° clockwise, and rotated 180° to an inverted position, relative to the vertical position. Fig. 25E is a front view of the digital video camera 10 in the orientation of Fig. 25B, labeled with dimension lines indicating 185° counterclockwise and 95° clockwise ranges of angular displacement of the horizontal image plane 16, which is achieved by manual rotation of the rotary controller 14. The orientation may be flipped by simply changing the pixel selection prior to signal processing, or it may be flipped during signal processing by simply changing the interpretation of the pixels. The orientation may be controlled automatically by sensing the orientation of the camera housing 22 with any of a variety of sensors and remapping the pixels based on that data.
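The electrical 180° flip amounts to reading the pixel array in reverse order along both axes. A minimal sketch, using a nested list as a stand-in for the sensor frame (a real pipeline would remap sensor addresses rather than copy data):

```python
def flip_180(frame):
    """Return the frame rotated 180 degrees.

    Reversing the row order and then each row's pixel order is
    equivalent to changing the pixel selection before signal
    processing, as described above.
    """
    return [row[::-1] for row in frame[::-1]]
```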

Fig. 26A and 26B, fig. 27A and 27B, fig. 28, and fig. 29A and 29B show the configuration of the digital video camera 10 in which the wireless protocol and GPS technology are implemented to enable remote image acquisition control and viewing. Fig. 26A and 27A are front perspective views of the digital video camera 10 with the slidable switch activator 80 in its record-on and record-off slide settings, respectively; fig. 26B and 27B are top views of the digital video camera 10 with the slidable switch activator 80 in its record-on and record-off slide settings, respectively. In these figures, a portion of the switch activator 80 is broken away to reveal the placement of certain internal components described in more detail below.

Fig. 28 is a partially exploded view of the digital video camera 10 showing the placement and mounting arrangement of the parts implementing the wireless protocol and GPS receiver technology in the main housing 100 shown in Figs. 5 and 6. The wireless module 400 is mounted in the main housing 100 at a position close to the rotary controller 14. The GPS assembly 402 is mounted in the main housing 100 at a location proximate the rear door 30 of the camera housing 22. An optical support cylinder 32a having an open-ended slot 404 is mounted on the main housing 100 in an orientation such that the upper ends of the wireless module 400 and the GPS assembly 402 are exposed within the slot 404. The switch activator 80, which is provided with a two-dimensional array of circular openings 406, is mounted within the slot 404 and slides within the slot 404 between a record-on slide-set position, shown in Figs. 26A and 26B, and a record-off slide-set position, shown in Figs. 27A and 27B. The openings 406 provide an audible acoustic path to facilitate pickup of spoken words or other sound effects by the microphone 90.

A common realization of a sliding switch with long travel involves the use of a magnet to pull and hold the switch in its final position, or a switch mechanism that is continuously pressed by the user over the entire travel distance and is provided with a holding mechanism in the open and closed positions. The digital video camera 10 is equipped with a slide switch mechanism that addresses the problems associated with long travel distances. A scissor spring 408 assists in actuating the slidable switch activator 80 between the record-on and record-off slide-set positions over a long range of travel.

Figs. 26B, 27B, and 28 illustrate a preferred shape of the scissor spring 408 and the manner in which it fits the geometric features of the inner sidewall surfaces 410 and the inner end wall surface 412 formed in a lower cavity 414 of the switch activator 80. The scissor spring 408 is a one-piece wire member that includes a plurality of bends forming a U-shaped central portion 420 having rounded distal ends 422, with a leg portion 424 extending back up from each rounded distal end 422 toward the central portion 420. The U-shaped central portion 420 includes a base member 426 and two generally parallel side members 428 terminating in the rounded distal ends 422. The upwardly extending leg portions 424 are generally outwardly offset from the side members 428 and terminate at ends 430 that curve inwardly toward the side members 428 and do not extend beyond the central portion 420. A curved section 432 in each leg portion 424 forms its inwardly directed curve and provides a bearing surface that contacts the inner sidewall surface 410 of the switch activator 80.

Figs. 26A, 26B, 27A, and 27B illustrate geometric features in the inner sidewall surfaces 410 and the inner end wall surface 412 of the switch activator 80. Each sidewall surface 410 includes an inwardly directed ramp portion 440 having an apex 442 and proximal and distal ends 444 and 446, respectively, adjacent and distal to the end wall surface 412.

The mounting of the scissor spring 408 in the main housing 100 involves placement of the U-shaped central portion 420 with the base member 426 and the side members 428 against a tab 450 on the top surface 452 of the printed circuit board (PCB) of the GPS assembly 402. The length of the base member 426 is selected so that the tab 450 fits snugly within the U-shaped central portion 420 to hold the scissor spring 408 stationary during sliding movement of the switch activator 80. As shown in Figs. 26A and 26B, the curved section 432 of each scissor spring leg portion 424 rests in a shallow notch formed at the distal end 446 of the ramp portion 440 whenever the switch activator 80 is in the record-on slide-set position. As shown in Figs. 27A and 27B, whenever the user moves the switch activator 80 from the record-on slide-set position to the record-off slide-set position, the curved section 432 exits the shallow notch at the distal end 446, slides along the entire length of the ramp portion 440, and rests in a shallow notch formed at the proximal end 444 of the ramp portion 440. The curved section 432 of the leg portion 424 has a shape that is complementary to a curved section 448 of the inner end wall surface 412.

The shaping of the scissor spring 408 applies a resistance force that prevents initial sliding movement of the switch activator 80 in either direction; but in response to user-applied pressure overcoming that resistance force, the switch activator 80 automatically travels to the rest position without user assistance. The scissor spring 408 applies a passive resistance to any movement, thus holding the switch activator 80 in place until the user again moves it. The shape of the scissor spring 408 may be varied to suit the geometry, stroke length, and desired holding force of the switch activator 80.

The spring solution described above is uniquely vibration resistant and well suited for high-vibration environments. The scissor spring 408 is an improvement over a magnetic slide switch implementation because it does not introduce magnetic interference that could affect other functions in the digital video camera 10. The scissor spring 408 is also an improvement over a dual-detent implementation because the user is assured that the switch activator 80 is fully in place. This spring solution can be extended to include a combination of multiple springs to provide a specific motion or a specific force profile. This spring solution can also control linear or circular motion.

Figs. 29A and 29B show perspective and exploded views, respectively, of the GPS assembly 402 separate from the main housing 100, in which the GPS assembly 402 is mounted for operation in the digital video camera 10. The GPS assembly 402 includes a GPS passive patch antenna 456 and a GPS receiver module 458 to provide GPS functionality to the digital video camera 10. A GPS ground plane 460 in the form of a stepped, generally U-shaped aluminum housing is positioned between the patch antenna 456 and the GPS printed circuit board 454 and is secured to the top surface 452 of the latter by a GPS ground plane mounting strap 462. The GPS receiver module 458 is mounted to the GPS printed circuit board 454 on a bottom surface 464 thereof. The preferred GPS patch antenna 456 is a model PA1575MZ50K4G-XX-21 high-gain, customizable antenna available from INPAQ Technology of Taiwan. The GPS patch antenna 456 is custom tuned to its peak frequency to account for detuning effects of the edge of the optical support cylinder 32a. The preferred GPS receiver module 458 is a model NEO-6 module available from u-blox of Switzerland.

Fig. 29A and 29B show that the GPS ground plane 460 is physically shaped to complement or mirror the curved shape of the optical support cylinder 32a of the housing 22 so that the ground plane area can be maximized because the shape of the ground plane conforms to (i.e., does not change) the shape of the camera housing 22. Further, the GPS patch antenna 456 is supported by its own internal ground plane that is arranged so that it overlaps the inside of an existing aluminum housing. This overlap allows RF current to pass between the aluminum housing and the GPS ground plane 460 through capacitive coupling, thus having the effect of increasing the size of the overall ground plane area. This increased ground plane area further improves GPS reception. In addition, the GPS patch antenna 456 is tuned by coupling these components for optimal reception of the overall system. The customization of the ground plane and the electrical coupling to the camera housing 22 or other metal components of the digital video camera 10 improves performance by enabling higher antenna gain and resulting enhanced signal reception when the digital video camera 10 is installed in multiple locations.

When recording video or taking pictures in sports applications, the digital video camera 10 is typically mounted in a location that does not allow the user to easily see the camera. Implementing the digital video camera 10 with a wireless connection protocol enables remote control of operations on and remote access to image data stored in the digital video camera 10. In a preferred embodiment, integrating wireless technology in the wearable digital video camera 10 facilitates several features including remote control, frame optimization, multi-camera synchronization, remote file access, remote viewing, data acquisition (in conjunction with GPS capability), and multiple data source access (in conjunction with GPS capability).

Implementing wireless technology in the digital video camera 10 enables a user to control it remotely using a telephone, computer, or dedicated controller. This allows the digital video camera 10 to remain sleek, with few buttons and no screen. In addition, no access to a screen or controller is required, providing more flexibility in mounting the digital video camera 10.

A remote control device (i.e., a telephone, computer, dedicated viewer, or other enabling device) may access files stored on the digital video camera 10 to allow a user to view the content in those files and manage them on the camera. Such access may include file transfer or file playback in the case of video or audio content.

Using wireless signaling, the remote device may have access to data streamed from the digital video camera 10. Such data may include camera status, video, audio, or other collected data (e.g., GPS data). Standard video may exceed the bandwidth of the connection. To address any quality of service issues, a snapshot mode is used to simulate the video. In this case, the photos are taken continuously and then streamed and displayed in sequence to simulate video playback. Firmware in the main processor captures and streams the photos, and the receiving application is designed to display the photos in rapid succession. To save space, the photos may be stored in a FIFO buffer, so that only limited playback is provided.
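The snapshot-mode buffer described above can be sketched as follows; the class name and capacity are illustrative, not part of the described firmware:

```python
from collections import deque

class SnapshotStreamer:
    """Simulate video with a rolling buffer of still photos.

    Photos are pushed as they are taken; a bounded deque acts as the
    FIFO described above, so only the most recent `capacity` shots
    remain available for (limited) playback.
    """
    def __init__(self, capacity=30):
        self.fifo = deque(maxlen=capacity)

    def capture(self, photo):
        self.fifo.append(photo)     # oldest photo drops off when full

    def playback(self):
        return list(self.fifo)      # frames displayed in rapid succession
```

The bounded `deque` captures the space-saving trade-off: playback is limited to the buffer's depth, exactly as the text notes.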

Alternative implementations of the remote viewer include one or more of reduced resolution or frame rate, file segmentation, frame sampling, and Wi-Fi to a media server. Reduced resolution or frame rate involves recording video in two formats (high quality and low quality), where the lower quality file is streamed back or played back after the recorded action has occurred. For streaming implementations, the wireless connection bandwidth may be monitored so that the resolution, bit rate, and frame rate of the secondary recording suit the available bandwidth. Furthermore, buffering may be used in conjunction with adaptive bit rate control. File segmentation involves splitting a recording into small files and transmitting each file after completion to allow viewing by a wireless device in near real time. File transfers may be delayed in order to limit interruptions due to bandwidth limitations. Frame sampling is the taking of real-time video frame samples (e.g., only the compressed intra-frames (I-frames) of the video). Wi-Fi to a media server involves using Wi-Fi to set up the camera as a media server on a selected network, allowing other devices to read and play content accessed from the camera.
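The bandwidth-matched secondary recording can be sketched as a simple profile selection; the profile names and bit rates below are hypothetical:

```python
def pick_stream_profile(measured_kbps, profiles):
    """Choose the highest-bit-rate secondary recording that fits the link.

    `profiles` is a list of (name, required_kbps) options; the monitored
    wireless bandwidth selects the best profile that still fits, falling
    back to the lowest-rate profile when none fits.
    """
    fitting = [p for p in profiles if p[1] <= measured_kbps]
    if fitting:
        return max(fitting, key=lambda p: p[1])
    return min(profiles, key=lambda p: p[1])
```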

Fig. 30 is a simplified block diagram illustrating a preferred implementation of wireless technology in the digital video camera 10. Fig. 30 shows the digital video camera 10 with a built-in wireless module 400, which makes a mobile device (e.g., a smartphone or a tablet) a wireless handheld viewfinder in response to Contour Connect Mobile App application software executing on the operating system of such a mobile device. A Contour Connect Mobile App compatible with Apple's iOS mobile operating system is available on the iPhone App Store, and a Contour Connect Mobile App compatible with Google's Android mobile operating system is available on the Android Market. The firmware of the main processor 500 stores an updated version of software compatible with the Contour Connect Mobile App application software executing on the mobile device. This wireless connection capability enables a user to configure camera settings and preview what the digital video camera 10 sees in real time. Specifically, the user can check the camera angle on the wireless device screen and, without guesswork, aim the camera and adjust the video, brightness level, and audio settings before starting the activity he or she wants to record.

The functionality allowed on an industry standard interface is often limited by the receiving or sending device according to its permissions. This means that one device may refuse to allow certain functions if another device does not have the correct credentials or authentication. For example, iPhones and similar products require certain security authentication with respect to data signals transmitted over an interface. The security requirements for such interfaces vary from product to product and manufacturer to manufacturer. Typically, the same product is intended to be connected to multiple devices, and it is not desirable to integrate security features for all possible functions or external devices.

In a preferred embodiment, the signal path is designed so that the presence of a security integrated circuit is not required for the full functionality of the other device. However, by including a connector in the signal path, a security module can be added by the user after manufacture to allow connection with such controlled devices. By including such a connector in the signal path, the relevant signal security module can be provided separately for those applications that require such security authentication. Further, in a preferred embodiment, the security card is packaged separately as a self-contained card. The circuit is designed to preserve authentication integrity but is connected to the control device through a standard connector. Fig. 30 also shows the placement of a Contour Connect View security card 502 in the card slot and connector 504 of the digital video camera 10 to enable connection with a supported iOS device. The Contour Connect View card is available from Contour, Inc., the assignee of this patent application.

Fig. 31 is a flow chart showing pairing of two devices through a wireless connection. The main processor 500 of the digital video camera 10 stores data files identifying enabled viewer/controller devices 510. (The appearance of a smiley face icon in the flow chart indicates an action taken by the user or status information displayed to the user.) The user presses a wireless connection activator button (preferably located near the switch activator 80 but not shown in the figures) on the camera housing 22 to turn on the wireless module 400, which sends a ("BT") connection request signal to the connection-enabled viewer/controller 510. The viewer/controller 510 receives the connection request signal, determines whether there is an ID connection matching pair, and, after identifying the matching pair, determines whether the viewer/controller 510 is iOS- or Android-implemented. If it is an Android implementation, and thus security is not required, the viewer/controller 510 allows and enables the Contour Connect Mobile App to perform data transfers to and from the digital video camera 10. If it is an iOS implementation, and security is therefore required, the viewer/controller 510 sends a security challenge signal for passage through the wireless module 400 and the main processor 500 to a coprocessor 514 installed on the security card 502. The coprocessor 514 sends a security code for communication to the viewer/controller 510 via the main processor 500 and the wireless module 400; the viewer/controller 510 validates the security code and allows and enables the Contour Connect Mobile App to perform data transfers to and from the digital video camera 10.
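The pairing decision flow of Fig. 31 can be sketched as follows; the function, the return strings, and the challenge/response handling are placeholders for illustration, not the actual protocol:

```python
def pair(camera_known_ids, viewer_id, viewer_os, security_card=None):
    """Sketch of the Fig. 31 pairing decisions.

    The camera checks the requesting viewer/controller against its
    stored ID data file; Android viewers pair immediately, while iOS
    viewers require a challenge answered by the (optional) security
    card coprocessor, modeled here as a callable.
    """
    if viewer_id not in camera_known_ids:
        return "no-match"
    if viewer_os == "android":
        return "paired"                      # no security required
    if viewer_os == "ios":
        if security_card is None:
            return "security-unavailable"    # card not installed
        code = security_card("challenge")    # coprocessor answers
        return "paired" if code else "security-failed"
    return "unsupported-os"
```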

Using the data file to identify the ID of a device allows two devices to pair when neither device has a display. Fig. 32 is a flow chart showing an example of pairing a microphone 90 and the digital video camera 10, neither of which has a display screen. The digital video camera 10 and a controller 510' are initially paired by a wireless data connection, and the Contour Connect Mobile App is active, as described above with reference to Fig. 31. The controller 510' is similar in structure to the viewer/controller 510, except that the controller 510' does not have a display screen. The user slides the switch activator 80 to its open position to supply power to the microphone 90, which sends a pairing request signal to the digital video camera 10; the digital video camera 10 detects the request and forwards it to the controller 510' for confirmation. The user responds to the pairing request by manipulating an actuator associated with the controller 510'. If the user actuation indicates rejection of the pairing request, the controller 510' ends the pairing process. If the user actuation indicates acceptance of the pairing request, the controller 510' sends a confirmation signal to the digital video camera 10, along with a password if one is required by the microphone 90. Upon receipt of the confirmation signal, the digital video camera 10 sends the confirmation signal and any password to the microphone 90, whereupon audio data capture and recording are initiated by the audio encoder in the digital video camera 10, thereby completing the pairing.

Fig. 33 is a flow chart illustrating a preferred camera position adjustment procedure performed by a user wearing a helmet (e.g., a bicycle or snowboard rider or a skier) to align a digital video camera 10 mounted on the user's helmet. The digital video camera 10 and the viewer/controller 510 are initially paired by a wireless data connection, and the Contour Connect Mobile App is active, as described above with reference to Fig. 31. Initiating the control/viewer application command causes a fast picture transfer data request signal to be sent to the data transfer-enabled digital video camera 10, which responds by taking rapid successive (e.g., five pictures per second) shots of the scene pointed to by the camera lens 26. The installation activity sequence 520 shown in Fig. 33 represents the following user activities: installing the digital video camera 10 on the helmet; selecting a mounting position; and adjusting the position and angle of the digital video camera 10 by selecting the mounting surface location on the helmet and rotating the rail plug 132 within the base mount 130h of the mounting system 300. As the user performs the angle/position mounting adjustments, the rapid successive shots of the scene are taken and transmitted to the viewer/controller 510 for display on its screen in near real time. Successive iterations of angle/position mounting adjustment, rapid successive photographs, and user observation of the displayed scene continue until the user is satisfied with the position of the displayed scene, at which point the mounting position adjustment of the digital video camera 10 on the helmet is complete.

Frame optimization may be accomplished by means of a remote control device, or within the digital video camera 10 (if equipped with a screen and controller). Frame optimization may involve one or both of illumination and color optimization and frame alignment, whether manual or automatic.

Fig. 34 is a flow chart showing a preferred manual illumination and color setting adjustment procedure performed by the user after completion of the mounting position adjustment described above with reference to Fig. 33. The manual illumination and color setting procedure shown in Fig. 34 differs from the mounting position adjustment procedure of Fig. 33 in that: 1) the installation activity sequence 520 is not applicable; 2) a settings decision (OK?) block replaces the position decision block in the viewer/controller 510; and 3) manual angle/position mounting adjustment is replaced by transmission of new setting instructions generated in response to user manipulation of change-illumination-and-color-setting actuators associated with the viewer/controller 510, with fast continuous shooting of the scene continuing. The manual illumination and color adjustment procedure involves the user viewing successive photographs on the display screen and manipulating the change-illumination-and-color-setting actuators associated with the viewer/controller 510 until the user is satisfied with the displayed illumination and color, whereupon the manual setting adjustment is complete.

Automatic illumination and color optimization uses video or photo analysis to control the device. Fig. 35 is a flow chart showing a preferred automatic illumination and color setting adjustment procedure performed by the user after completion of the mounting position adjustment described above with reference to Fig. 33. The automatic illumination and color setting procedure shown in Fig. 35 differs from the manual illumination and color setting procedure shown in Fig. 34 in that an automatic adjustment iteration loop replaces the settings decision block of Fig. 34. Specifically, a start auto-adjustment process block initiates an iterative auto-adjustment loop of programmed analysis of photo color, illumination, and position, followed by a quality optimization decision query based on a set of programmed quality criteria. The auto-adjustment loop iteratively performs the analysis and causes new setting instructions to be transmitted to the digital video camera 10 to take additional pictures for display and analysis by the viewer/controller 510. The automatic illumination and color adjustment procedure entails automatic internal analysis of the picture on the display screen and preprogrammed automatic adjustment of the illumination and color settings until the quality optimization decision block indicates that the image quality meets the preprogrammed optimal quality criteria; the user then manipulates an actuator to indicate satisfaction, whereupon the automatic setting adjustment is complete. The viewer/controller 510 may implement adjustment algorithms to analyze the frames, adjust settings, and re-analyze the frames to optimize the illumination and color settings. Small and fine alignment adjustments can be made automatically by altering the pixels used to define the frame. These adjustments may be made by redefining the center pixel or by redefining the bounding box. These adjustments may be horizontal, vertical, and rotational, including a full 180° rotation to allow the digital video camera 10 to be mounted upside down, as shown in Fig. 25D. For more precise optimization, the digital video camera 10 may be pointed at a predefined chart to allow automatic adjustment to achieve more precise and consistent settings.
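The iterative auto-adjustment loop can be sketched, for the illumination dimension alone, as a simple feedback loop; the target, tolerance, and proportional correction below are illustrative stand-ins for the preprogrammed quality criteria:

```python
def auto_adjust(take_photo, target_brightness=0.5, tolerance=0.05,
                max_iters=20):
    """Iterative exposure loop in the spirit of the auto-adjust cycle.

    `take_photo(exposure)` stands in for "send new settings, shoot,
    analyze": it returns the mean brightness (0..1) of a frame taken
    at the given exposure. The loop nudges the exposure until the
    brightness meets the quality criterion or the budget runs out.
    """
    exposure = 1.0
    for _ in range(max_iters):
        brightness = take_photo(exposure)
        error = target_brightness - brightness
        if abs(error) <= tolerance:
            return exposure, brightness     # criterion satisfied
        exposure *= 1.0 + error             # proportional correction
    return exposure, brightness
```

A full implementation would run analogous loops for color balance and frame position, terminating each on its own quality criterion.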

The many-to-many nature of wireless technology enables a user to control multiple cameras. Multi-camera control allows the controller to coordinate the illumination and color settings on all cameras, provide guidance for camera position alignment, and synchronize video on multiple cameras with synchronized start/stop frames or with synchronized "On Screen Display" (OSD) frames or audio sounds that can be embedded in the video to facilitate editing and post-processing. The use of a wireless connection allows one camera to provide a synchronization signal to another camera so that the video can be synchronized in post-processing. OSD frames may be stored in advance in the memory of the digital video camera 10 and simply triggered by a frame synchronization pulse, to limit transmission bandwidth requirements and any associated errors or delays. This synchronization may include information such as the video file name and the camera identity of the master camera. To improve the accuracy of the synchronization timing, the wireless transfer rate may be calibrated by sending an echo message (ping) to the secondary device and listening for the response. To further improve accuracy, this ping/response cycle may be repeated multiple times.
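The ping-based calibration can be sketched as follows, assuming `ping()` returns a measured round-trip time; this is a minimal illustration, not the actual firmware:

```python
def estimate_latency(ping, rounds=5):
    """Calibrate one-way wireless transfer delay by repeated echo.

    `ping()` stands in for sending an echo message to the secondary
    device and measuring the round-trip time; averaging over several
    rounds and halving approximates the one-way latency by which the
    master would advance its synchronization pulse.
    """
    samples = [ping() for _ in range(rounds)]
    return sum(samples) / len(samples) / 2.0
```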

A separate remote device may be used to pair two cameras, where neither camera has a screen. Fig. 36 shows a (master) camera 1 and a (slave) camera 2 of the same type as the digital video camera 10, aimed at a common chart 530. The respective camera mounts can be adjusted to align the images in the Z-axis. The illumination and color settings may be adjusted so that they match. Aligning the images and adjusting the illumination and color settings eliminates the need for post-processing when combining video from multiple cameras at multiple angles or for three-dimensional views. Fig. 36 shows an iPhone paired to cameras 1 and 2, implementing the remote start/stop capability described below. The master camera 1 sends an OSD frame sync pulse to the slave camera 2. The master camera 1 analyzes the pictures from the slave camera 2 and adjusts the settings to match the alignment and settings of the master camera 1.

Fig. 36 presents two views of the display screen 532 of an iPhone-type viewer/controller 510 showing side-by-side images produced by cameras 1 and 2 viewing the chart 530. The upper diagram 534 and the lower diagram 536 show the position and color matching before and after correction, respectively. Diagram 534 shows that the two camera images are misaligned in the Z-axis and color-unbalanced, and diagram 536 shows that the corrected images are aligned in position and matched in color.

By controlling multiple cameras, the user is able to coordinate shots from different angles and ensure that the color and lighting settings are similar to allow seamless switching during playback. The preferred embodiments can be extended such that where there are multiple devices daisy-chained together, they can use a single authentication. For example, if there are two cameras connected to a device requiring such authentication, the signal from one camera may be routed through the other camera to use its security, and the intermediate device will be the only device required to provide such security. This security component can also be a stand-alone component that is simply inserted into the security path as a pass-through that adds only the authentication or approval required by the receiving device and performs any transformations required for the response to be interpreted correctly.

Fig. 37 illustrates an exemplary user application that allows a user to change illumination and color settings and immediately view the resulting changed video. Fig. 37 is a flow chart showing camera 1 and an iOS mobile handset or tablet device 510 paired by a wireless connection, which cooperate to accomplish a pass-through of camera 2 data without security. The user pushes the wireless connection actuator on camera 2 to send a pairing connection request signal to the enabled camera 1; camera 1 detects the request, confirms the pairing, and sends a signal to camera 2 to complete the pairing. Camera 2 responds by taking pictures in rapid succession and sending them, along with status information, to camera 1 for direct transmission to the device 510 for display on the display screen 532 as camera 2 images and data. The user manipulates actuators associated with the device 510 to change the illumination and color settings, causing a new setting instruction signal to be sent to camera 1 for direct transmission to camera 2; camera 2 responds by changing its illumination and color settings.

Data acquisition and data synchronization using wireless communication, preferably in conjunction with GPS capability, may be accomplished by one of several techniques. When video is captured during an activity, the acquired data can be used to better describe the activity and to aid editing and optimization during recording or post-processing. Typically, these data are embedded in the video as user data, or in the file as a data track (according to the MPEG specification). In a first alternative, data may be written to a text track in the file. These data are ignored by the player unless text display is turned on. Post-processing algorithms extract these data for analysis. Typically, a text track survives editing. In a second alternative, the data may be written to a separate file, and the file name of the data file may be written as metadata on the video file so that a post-processing application can correctly associate the data with the video. The data are optimally synchronized with the video, but they need not be frame synchronized. In the case where the data are stored in a separate file, a time marker may be used to synchronize them with the video. This marker may be embedded in the data file to mark the file at a single time (e.g., at the start, middle, or end, or at a time designated by the user), to mark each video frame, or to mark periodically.
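The second alternative, a separate data file associated with the video and synchronized by time markers, can be sketched as follows; the JSON layout and names are purely illustrative, since an actual implementation would instead write MPEG user data or a timed text track:

```python
import json

def write_sidecar(video_name, samples):
    """Serialize GPS samples into a sidecar record keyed to the video.

    Each sample is (timestamp_seconds, data); carrying the video file
    name alongside lets a post-processing application re-associate the
    data with the correct video, as the metadata approach describes.
    """
    return json.dumps({
        "video": video_name,
        "track": [{"t": t, "data": d} for t, d in samples],
    })
```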

Fig. 38 shows a hybrid flowchart and pictorial view of an iPhone viewer/controller 510 paired to cameras 1 and 2 by a wireless data and control command connection, which implements a remote start/stop capability for multiple cameras. (Cameras 1 and 2 are also identified by respective reference numerals 101 and 102 to indicate that they are of the same type as digital video camera 10.) The flowchart shows the iPhone viewer/controller 510 paired with cameras 1 and 2 and running the Connect Mobile App in its active mode of operation. The pictorial view of the iPhone viewer/controller 510 shows the start recording actuator on its display screen 532.

A user wishing to start a recording session taps the start recording actuator to send a start recording command signal to the enabled cameras 1 and 2. The flowchart shows that, in response to the start recording command signal, cameras 1 and 2 record video data. The wireless module 400 in each of cameras 1 and 2 is configured to respond to the start recording command signal regardless of the off state of the switch activator of cameras 1 and 2.

A user who wants to finish a recording session taps a stop recording actuator (not shown in Fig. 38) on the display screen 532 to send a stop recording command signal to cameras 1 and 2. The flowchart shows that cameras 1 and 2 stop video recording in response to the stop recording command signal.

Fig. 38 also shows upper and lower timing diagrams illustrating the timing of video frame acquisition by cameras 1 and 2 in two situations, respectively: cameras 1 and 2 started manually and asynchronously in response to a user positioning switch activator 80; and cameras 1 and 2 started almost simultaneously in response to the user tapping the start recording actuator on display screen 532 of iPhone controller/viewer 510. The lower timing diagram illustrates the benefit of a wireless connection in achieving nearly simultaneous acquisition of video data streams from multiple cameras.

Fig. 39 is a flowchart showing an example of pairing cameras 1 and 2 through the viewer/controller 510 or the controller 510', the latter being shown in Fig. 39. Fig. 39 shows camera 1 paired by a wireless connection to controller 510', which runs the Connect Mobile App in its active mode of operation. The user presses the wireless connection activator button on camera 2 to turn on its module 400, and module 400 sends a pairing (connection) request signal to camera 1. Camera 1, which has already been paired with controller 510', detects the pairing request signal and transmits a camera pairing request signal to controller 510'. Controller 510' presents the pairing request to the user, who either manipulates an actuator to reject the requested pairing connection, thereby stopping the pairing process, or manipulates an actuator to accept the requested pairing connection, thereby sending a confirmation pairing signal through camera 1 to camera 2 to complete the pairing connection.

A synchronization calibration sequence 540 performed between cameras 1 and 2 calibrates the transmission delay between them. Camera 1 sends a synchronization calibration signal to camera 2, and camera 2 responds by sending a synchronization response signal. Camera 1 determines a calibration delay representing the amount of delay from the transmission of the synchronization calibration signal to the reception of the synchronization response signal. This process is repeated until successively measured calibration delays are within an operational tolerance.
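The repeated round-trip measurement can be sketched as follows. This is a minimal illustration assuming a `send_ping` callable that models one calibration-signal/response exchange and returns the measured round-trip delay; the tolerance test simply requires two consecutive measurements to agree within a bound.

```python
def calibrate_delay(send_ping, tolerance=0.005, max_rounds=20):
    """Repeat the sync-calibration exchange until two consecutive
    round-trip measurements agree within the operational tolerance.

    send_ping() models sending the synchronization calibration signal
    and timing the synchronization response from the slave camera.
    """
    previous = send_ping()
    for _ in range(max_rounds):
        current = send_ping()
        if abs(current - previous) <= tolerance:
            return current  # stable calibration delay
        previous = current
    raise RuntimeError("calibration did not converge")

# Usage with a stub transport whose delay settles after a few rounds.
measurements = iter([0.080, 0.060, 0.051, 0.050, 0.050])
delay = calibrate_delay(lambda: next(measurements))  # → 0.050
```

A real implementation would time actual wireless exchanges; the stub merely shows the convergence test.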

After the synchronization calibration sequence 540 is completed, a synchronized video recording process 542 is initiated. Camera 1 operates as the master camera and, in response to a user-controlled trigger signal, sends a start recording signal to camera 2, which responds by starting to record video data. After the calibration delay determined by the synchronization calibration sequence 540 has expired, camera 1 begins recording video data, achieving a synchronized start of video data recording by cameras 1 and 2.

An on-screen display ("OSD") sync pulse insertion process 544 facilitates video frame synchronization in video and audio post-processing. In response to camera 1 starting video data recording, camera 1 sends a trigger OSD synchronization signal to camera 2. Camera 2 responds to the trigger OSD sync signal by inserting an OSD sync pulse overlay into the stream of video frames it captures. After expiration of the calibration delay determined by the synchronization calibration sequence 540, camera 1 inserts an OSD sync pulse overlay into the stream of video frames captured by camera 1. The time base for calculating the calibration delay and for OSD sync pulse insertion is preferably provided by the GPS date/time clock available from the GPS receiver 458.

Video and audio post-processing routine 546 searches the video frame streams for the OSD sync pulses and shifts the timing of camera 2's video frame stream to match the OSD sync pulses of camera 1. The OSD sync pulses are used to adjust the frame center, color, audio volume, and other parameters of the camera video and audio data so that the video and audio data streams can be combined for multi-angle shots, three-dimensional images, or other effects.
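The timing shift described above can be sketched as follows. Each camera's output is modeled as a list of frames in which one frame carries the OSD sync pulse; the frame representation and the `is_sync` predicate are illustrative assumptions, not the actual video format.

```python
def align_by_sync_pulse(frames_1, frames_2, is_sync):
    """Shift camera 2's frame stream so its OSD sync pulse lines up
    with camera 1's, then trim both streams to a common length."""
    i1 = next(i for i, f in enumerate(frames_1) if is_sync(f))
    i2 = next(i for i, f in enumerate(frames_2) if is_sync(f))
    if i2 > i1:
        frames_2 = frames_2[i2 - i1:]  # camera 2 started earlier: drop its lead-in
    else:
        frames_1 = frames_1[i1 - i2:]
    n = min(len(frames_1), len(frames_2))
    return frames_1[:n], frames_2[:n]

# Usage: "S" marks the frame carrying the OSD sync pulse.
cam1 = ["a", "S", "b", "c", "d"]
cam2 = ["x", "y", "S", "b2", "c2"]
a, b = align_by_sync_pulse(cam1, cam2, lambda f: f == "S")
```

After alignment the sync-pulse frames occupy the same index in both streams, so corresponding frames can be combined for multi-angle or three-dimensional effects.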

Fig. 40 is a block diagram showing a post-processing procedure for synchronizing audio data generated by a wireless microphone 550 and by the wired microphone 90 included in the digital video camera 10. Audio data generated by the microphone 90 is compressed by an audio codec 552. The wireless microphone 550 produces audio signals that are received by the wireless module 400, converted to digital form by an analog-to-digital converter 554, and compressed by an audio codec 556. The video data generated by the image sensor 18 is compressed by a video codec 558, which is located in the main processor 500 of the digital video camera 10. The hard-wired audio data of the audio 1 track, the wireless audio data of the audio 2 track, and the video data of the video track, supplied from the outputs of audio codec 552, audio codec 556, and video codec 558, respectively, are combined as parallel tracks in an original video file 560, which is stored on an SD memory card 562.

The wireless microphone 550 introduces a delay in the audio 2 track. Fig. 40 illustrates this delay by showing a single-frame time offset between corresponding frames of the audio 1 and audio 2 tracks. The OSD sync pulse described above serves as an audio time stamp that can be used to correct the delay, synchronizing the audio 1 and audio 2 tracks for automated post-processing audio analysis. Post-processing is performed in a peripheral computer 570, which includes a video editor 572 having an audio track extraction module 574 that receives the stored video track, audio 1 track, and audio 2 track data of the original video file 560 from the SD card 562. The audio track extraction module 574 separates the audio 1 and audio 2 tracks, and an audio synchronizer module 576 synchronizes them using the time stamp synchronization pulses. The synchronized audio 1 and audio 2 tracks are combined with the video track in a video/audio combiner module 578 and rendered to a new video file 580 in correct temporal frame alignment.
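The timestamp-based correction can be sketched as follows. Each track is simplified to a list of (timestamp, frame) pairs in which the timestamps derive from the sync-pulse time stamps, so synchronizing reduces to trimming both tracks to their common time range; the frame representation is an assumption of the sketch, not the actual codec output.

```python
def sync_audio_tracks(audio1, audio2):
    """Align the wireless audio 2 track with the hard-wired audio 1
    track by trimming both to the range of timestamps they share.

    Each track is a list of (timestamp, frame_data) pairs; the
    wireless link offsets audio 2 by a whole number of frames.
    """
    start = max(audio1[0][0], audio2[0][0])  # first common timestamp
    a1 = [f for f in audio1 if f[0] >= start]
    a2 = [f for f in audio2 if f[0] >= start]
    n = min(len(a1), len(a2))
    return a1[:n], a2[:n]

# Usage: the wireless track lags by one frame (frame period = 1 unit).
audio1 = [(0, "a0"), (1, "a1"), (2, "a2"), (3, "a3")]
audio2 = [(1, "w1"), (2, "w2"), (3, "w3"), (4, "w4")]
s1, s2 = sync_audio_tracks(audio1, audio2)
```

The synchronized tracks can then be handed to the combiner stage in correct temporal frame alignment.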

The data measurements performed depend on the type of data collected. The most suitable data vary with the recorded physical activity or type of activity; thus, it is desirable that the data sensors be matched to the relevant physical activity. Furthermore, the optimal location for measuring data is often not the ideal location for mounting the camera.

FIG. 41 is a simplified block diagram illustrating the processing of data from a single track of a data source. Fig. 41 shows a digital video camera 10 including a video file 600 in its main processor 500, the video file 600 containing a video track, an audio track and a text track. The video track and the audio track correspond to the video track and the audio 1 track, respectively, contained in the original video file 560 of fig. 40. The text track represents data generated by a subtitle generator 602 hard-wired to the main processor 500 and rendered for display on a video frame.

Multiple data sources can be recorded by a camera that supports many-to-many connections. These data sources may be customized to a particular application; for example, in an automobile race, data relating to the car engine may be collected from the on-board diagnostics and sent to the digital video camera 10, where the data may be embedded in the video stream for later playback. Examples of using multiple data sources include streaming data from one or more data sources to one or more cameras (e.g., GPS data from a telephone or GPS collection device, and audio data from a remote microphone) and storing these data as a single file or embedding them in a video file as metadata, a video track, or text.

In post-processing, data associated with the video content can be used in editing to correct shading/lighting variations, correct video processing errors, and add information about the path taken, position, speed, and other information to a video segment. Position and time data embedded in the video from sources such as GPS can be used to synchronize the videos in post-processing that generates three-dimensional video. Speed, vibration, altitude, temperature, and location data may be combined to determine the likely athletic movement or activity as part of a post-processing program. Suggestions may be adjusted based on data collected from a large number of videos for which the activity in the video has been identified. Data associated with video content may be used to associate and group videos from one or more users. The grouping may be based on any characteristic, such as time, location, speed, and other factors. Videos that intersect in time or location may be linked, so that when two videos intersect in location or time, the viewer may transition to a different camera or video. Further, the data may be used to associate multiple cameras or videos to create multiple perspectives of the same location or event. These data can also be used to correlate videos of the same location taken over time, to record changes at that location over an extended duration (hours, days, weeks, or years).
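Grouping videos whose embedded data intersect in time and location can be sketched as follows. The dictionary fields and the gap and distance thresholds are illustrative assumptions; a real implementation would read the embedded GPS/time track and use proper geographic distance.

```python
def videos_intersect(v1, v2, max_gap=60.0, max_dist=0.001):
    """Decide whether two videos intersect in time and location.

    Each video carries (start, end) times in seconds and a (lat, lon)
    position in degrees; the thresholds are illustrative only.
    """
    overlap = (v1["start"] <= v2["end"] + max_gap and
               v2["start"] <= v1["end"] + max_gap)
    near = (abs(v1["lat"] - v2["lat"]) <= max_dist and
            abs(v1["lon"] - v2["lon"]) <= max_dist)
    return overlap and near

def group_videos(videos):
    """Greedily group videos whose embedded GPS/time data intersect."""
    groups = []
    for v in videos:
        for g in groups:
            if any(videos_intersect(v, w) for w in g):
                g.append(v)
                break
        else:
            groups.append([v])
    return groups

# Usage: two clips overlap at the same spot; a third is elsewhere.
a = {"start": 0, "end": 100, "lat": 47.60, "lon": -122.33}
b = {"start": 50, "end": 150, "lat": 47.60, "lon": -122.33}
c = {"start": 0, "end": 100, "lat": 40.00, "lon": -70.00}
groups = group_videos([a, b, c])
```

Linked groups of this kind are what would let a viewer transition between cameras where the videos intersect.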

Multiple "language" tracks on a video file may be used to capture different audio sources (including wireless microphones) from a video camera. This allows the user to select the best audio source in post-processing, or to automatically correct signal errors and synchronization problems. Because multiple sources are stored, the user can run post-processing algorithms and select the most reliable track if signal quality problems caused by using the wireless device result in a loss of signal (dropout). In addition, audio may be captured from multiple sources and from different locations to provide different audio information, so that the preferred audio may be selected in post-processing. Where multiple audio tracks are not available, a data track may be used, and the data converted to an audio source in post-processing. Where a wireless audio source cannot be routed through the audio codec, the raw data may be stored, and post-processing may convert these data to audio. Any delay introduced by the wireless connection can be corrected by using the audio waveform to synchronize the wireless audio source to the primary audio source (the internal microphone).
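The waveform-based delay correction mentioned above can be sketched with a simple cross-correlation search: slide the wireless track against the internal-microphone track and pick the shift that maximizes their correlation. This is a minimal sketch on short integer sample lists; a real implementation would work on windows of decoded PCM audio.

```python
def find_delay(reference, delayed, max_shift=32):
    """Return the shift (in samples) that best aligns `delayed`
    with `reference`, by maximizing the cross-correlation."""
    best_shift, best_score = 0, float("-inf")
    for shift in range(max_shift + 1):
        n = min(len(reference), len(delayed) - shift)
        if n <= 0:
            break
        score = sum(reference[i] * delayed[i + shift] for i in range(n))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

# Usage: the wireless track is the reference waveform delayed by 3 samples.
ref = [0, 1, 0, -1, 0, 2, 0, -2, 0, 1]
wireless = [0, 0, 0] + ref
shift = find_delay(ref, wireless)  # → 3
```

Once the shift is known, the wireless track can be advanced by that many samples to synchronize it with the internal microphone's track.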

The foregoing method differs from the prior art of automatically switching between an internal microphone and an external microphone, in which the external microphone is used when it is present and the software automatically reverts to the internal microphone when the external microphone signal is unavailable. Such automatic switching mixes audio from different locations and does not provide a seamless audio experience.

FIG. 42 is a simplified block diagram illustrating the processing of data from multiple tracks of multiple data sources. Fig. 42 shows the digital video camera 10, which includes in its main processor 500 a video file 610, the video file 610 containing video tracks and audio tracks corresponding to those contained in the video file 600 of fig. 41, and five text tracks as described below.

The data processing and computation module 612 of the main processor 500 receives data from the GPS receiver 458, a camera sensor 614, the wireless module 400 (which receives data transmissions from wireless connection enabled sources), and the wired data module 614, and submits these data as text track 1, text track 2, text track 3, text track 4, and text track 5, respectively.

Text track 1 contains GPS data, such as longitude, latitude, altitude, date/time, and other data available from the GPS receiver 458. The date/time information can associate the captured video and other data (including the data on text tracks 2-5) with a point in time in the video data stream. The peripheral computer 570 acquires the time-stamped information and displays it at that point in time. The transmission delay calibration described with reference to Fig. 39 may be implemented using the GPS-provided date/time clock as a time standard.
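A text track 1 record of this kind can be sketched as follows. The JSON encoding and field names are assumptions of the sketch; they stand in for whatever record format the text track actually carries.

```python
import json
from datetime import datetime, timezone

def gps_text_record(lat, lon, alt, fix_time, stream_time):
    """Format one text-track-1 record: a GPS fix plus the point in
    the video data stream it belongs to. Field names are
    illustrative, not a defined text-track format."""
    return json.dumps({
        "lat": lat, "lon": lon, "alt": alt,
        "utc": fix_time.isoformat(),  # GPS date/time clock
        "stream_time": stream_time,   # seconds into the video stream
    })

# Usage: one fix written 3.2 seconds into the recording.
rec = gps_text_record(47.60, -122.33, 56.0,
                      datetime(2011, 9, 13, 12, 0, 0, tzinfo=timezone.utc),
                      3.2)
```

Carrying the GPS date/time in every record is what lets the other text tracks be related to the same points in the stream.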

Text track 2 contains operational parameter data, such as video resolution, compression rate, and frame rate information, available from a camera sensor 614 associated with the digital video camera 10.

Text tracks 3 and 4 contain data collected from wireless connection enabled data A and data B transmission sources, such as racing car engine sensor data and racer heart rate monitoring data. These data are typically sent to the module 400 periodically. The data A and data B sources may also, for example, transmit data at different transmission rates.

Text track 5 contains data generated from a text data module (e.g., caption generator 602 of fig. 41) that is hardwired to data processing and computation module 612.

It will be obvious to those having skill in the art that many changes may be made to the details of the above-described embodiments without departing from the underlying principles of the invention. For example, one skilled in the art will appreciate that the subject matter of any sentence or paragraph may be combined with the subject matter of some or all of the other sentences or paragraphs, except where such combinations are mutually exclusive. The scope of the invention should, therefore, be determined only by the following claims.
