Sound-based dynamic light control method, device, system and storage medium

Document No.: 327280 · Published: 2021-11-30

Reading note: This technology, "Sound-based dynamic light control method, device, system and storage medium" (基于声音的灯光动态控制方法、设备、系统及存储介质), was designed and created by 杨伟展, 魏彬, 朱奕光, 谢姜, 张良良 and 曾滔滔 on 2021-07-20. Abstract: The invention discloses a sound-based dynamic light control method comprising: acquiring sound data; converting the sound data into a spectrum signal and performing region-by-region color processing on the spectrum signal to generate target base color values; converting the sound data into a brightness value; generating color control parameters from the brightness value and the target base color values; and outputting the color control parameters in sequence to each pixel of the light group, so that the lights of the group display a flowing dynamic effect. The invention also discloses a computer device, a system and a computer-readable storage medium. By combining sound with color and brightness so that both change together with the sound, the invention effectively reflects the character of the sound, creates a pleasant atmosphere and improves the user experience.

1. A dynamic light control method based on sound is characterized by comprising the following steps:

acquiring sound data;

converting the sound data into spectrum signals, and performing color processing on the spectrum signals in different regions to generate target base color values;

converting the sound data into a brightness value;

generating a color control parameter according to the brightness value and the target base color value;

and outputting the color control parameters to each pixel point of the light group in sequence so as to enable the light of the light group to be dynamically displayed in a flowing mode.

2. The method according to claim 1, wherein the step of performing region-by-region color processing on the spectrum signal to generate the target base color values comprises:

dividing the frequency spectrum signal into at least two primary color regions and calculating a primary color value of each primary color region, wherein the primary color regions correspond to the primary color values one to one;

carrying out color transition treatment on each basic color value;

respectively carrying out color correction processing on each basic color value after color transition processing;

and respectively carrying out color compensation processing on each basic color value after the color correction processing.

3. The method of claim 2, wherein, when the spectrum signal is divided into at least three base color regions, the step of performing region-by-region color processing on the spectrum signal to generate the target base color values further comprises: performing color highlighting processing on each base color value after the color compensation processing.

4. A method as claimed in claim 2 or 3, wherein the step of dividing the spectral signal into at least two primary color regions and calculating the primary color values for each primary color region comprises:

dividing the spectrum signal into at least two primary color regions according to a frequency value;

and respectively carrying out summation processing on the frequency corresponding values in each basic color area to generate basic color values.

5. A method as claimed in claim 2 or 3, wherein the step of performing color transition processing on each of the base color values respectively comprises:

extracting a current basic color value and N corresponding historical basic color values, wherein the current basic color value is a basic color value of the sound data obtained by calculation at the current moment, the N historical basic color values are basic color values of the sound data obtained by calculation at the previous N times of the current moment, one current basic color value corresponds to the N historical basic color values, and N is a positive integer;

and calculating the average value of the current basic color value and the corresponding N historical basic color values, and taking the average value as the basic color value after color transition processing.

6. A method as claimed in claim 2 or 3, wherein the step of performing color correction processing on each base color value after color transition processing comprises:

and multiplying the primary color values after the color transition treatment by corresponding preset correction coefficients for color correction, wherein the preset correction coefficients correspond to the primary color values after the color transition treatment one by one.

7. A method as claimed in claim 2 or 3, wherein the step of performing color compensation processing on each color value after color correction processing comprises:

detecting whether the primary color value after color correction is lost or not within a preset time;

counting the detection times and the loss times of the base color values after the color correction processing respectively;

and calculating to obtain the primary color value after the color compensation processing according to the detection times, the loss times and the primary color value after the color correction processing.

8. The method as claimed in claim 7, wherein the step of detecting whether the color correction processed primary color values are lost within a predetermined time comprises:

comparing the color-corrected base color value with a preset compensation value within the preset time, and judging whether the color-corrected base color value is smaller than the preset compensation value,

if yes, the color-corrected base color value is deemed lost,

if not, the color-corrected base color value is deemed not lost;

the step of calculating and obtaining the primary color value after the color compensation processing according to the detection times, the loss times and the primary color value after the color correction processing comprises the following steps:

performing color compensation according to the formula RGB_i = RGB_j · RGB_c, wherein RGB_i is the base color value after color compensation, RGB_j is the base color value after color correction, RGB_c = 1 + (RGB_s / S) · K is the compensation coefficient, RGB_s is the number of losses, S is the number of detections, and K is a preset proportion.

9. The method according to claim 3, wherein the step of performing color highlighting on each of the color compensation processed base color values comprises:

extracting the minimum primary color value from all the primary color values after color compensation processing;

subtracting the minimum basic color value from each basic color value after color compensation processing to generate a reference basic color value, wherein the reference basic color value corresponds to the basic color value after color compensation processing one to one;

extracting a maximum reference base color value from all the reference base color values;

when the maximum reference base color value is not zero, performing a non-linear operation for color highlighting according to the formula RGB_o = D · [(RGB_i - V_min) / V_max]^8, wherein RGB_o is the base color value after color highlighting, D is the preset maximum intensity value, RGB_i is the base color value after color compensation, V_min is the minimum base color value, and V_max is the maximum reference base color value; the base color values after color highlighting correspond one to one to the base color values after color compensation;

when the maximum reference base color value is zero, performing color highlighting according to the formula RGB_o = D.

10. The method of claim 1, wherein the step of converting the sound data into a brightness value comprises:

calculating a sound average value of the sound data;

processing the sound average value by adopting an automatic gain algorithm to obtain a reference average value of the current moment;

and generating a brightness value according to the sound average value and the reference average value of the current moment.

11. The method as claimed in claim 10, wherein the step of processing the sound average value by using an automatic gain algorithm to obtain a reference average value of the current time comprises:

calculating a reference average value of the current time based on the sound average value, a previous reference average value, and a previous count value, the previous reference average value being a reference average value of sound data acquired immediately before the current time, the previous count value being a count value of sound data acquired immediately before the current time, wherein,

when the sound average value is larger than the last reference average value, the sound average value is used as the reference average value and the maximum reference sound value of the current time, and the counting value of the current time is reset,

when the sound average value is less than or equal to the last reference average value, and the last reference average value and the last count value are both greater than zero, calculating the reference average value of the current time according to the formula A_agc_o = V_base · (V_count_0 / T), wherein A_agc_o is the reference average value of the current time, V_base is the maximum reference sound value, V_count_0 = V_count_i - 1 is the count value of the current time, V_count_i is the last count value, and T is the number of times the sound data is acquired within a preset time,

otherwise, setting the reference average value of the current moment to be zero;

the step of generating a brightness value according to the sound average value and the reference average value of the current time includes:

when the reference average value of the current time is not zero, performing a non-linear operation according to the formula V_bright = D · (A_avg / A_agc_o)^8 to generate the brightness value, wherein V_bright is the brightness value, A_avg is the sound average value, A_agc_o is the reference average value of the current time, and D is a preset maximum intensity value;

when the reference average value of the current time is zero, generating the brightness value according to the formula V_bright = 0.

12. The method of claim 1, wherein the step of generating color control parameters based on the luminance values and the target base color values comprises:

calculating the color control factor corresponding to each target base color value according to the formula RGB_K = RGB_q · (V_bright / D), wherein RGB_K is the color control factor, RGB_q is the target base color value, V_bright is the brightness value, and D is the preset maximum intensity value; the color control factors correspond one to one to the target base color values;

all color control factors are combined to form color control parameters.

13. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1-12.

14. A dynamic light control system, comprising a lighting device and a computer device according to claim 13, said computer device being connected to said lighting device.

15. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 12.

Technical Field

The invention relates to the technical field of light control, and in particular to a sound-based dynamic light control method, a computer device, a dynamic light control system, and a computer-readable storage medium.

Background

With the improvement of living standards, people's demand for entertainment such as concerts, large stage plays, music exhibition halls and musical fountain squares keeps growing; stage designers mostly use colorful lighting effects to enhance performances and increase artistic appeal.

However, existing light effects are all built from decorative lamps with fixed effects. Light effects displayed this way are typically fixed in kind, single in color and fixed in variation pattern; they cannot match external sound, the situations they can express are limited, and their entertainment value is weak.

For example, patent CN104244529A discloses a sound-sensing beat lighting system which, for each piece of music, designs special-effect labels bound to time nodes according to the beat of that specific piece, and activates the corresponding labels at those time nodes while the music plays, thereby controlling the display combination, distribution, brightness and color of a lighting display unit. When a specific prop sound is detected, the overall brightness of the light effect can be increased; when a specific voice is detected, a preset light-effect flow can be activated. However, this system can only handle preset music, which must be labeled and configured with special effects in advance; the form of the light effect is fixed, real-time adjustment to arbitrary sounds is impossible, flexibility is weak, and the effect is monotonous.

Disclosure of Invention

The technical problem to be solved by the present invention is to provide a sound-based dynamic light control method, a computer device, a system and a computer-readable storage medium that combine sound with color and brightness so that the color and brightness of the light change simultaneously with the sound.

In order to solve the technical problem, the invention provides a dynamic light control method based on sound, which comprises the following steps: acquiring sound data; converting the sound data into spectrum signals, and performing color processing on the spectrum signals in different regions to generate target base color values; converting the sound data into a brightness value; generating a color control parameter according to the brightness value and the target base color value; and outputting the color control parameters to each pixel point of the light group in sequence so as to enable the light of the light group to be dynamically displayed in a flowing mode.

As an improvement of the above solution, the step of performing color processing on the spectral signal in different regions to generate target base color values includes: dividing the frequency spectrum signal into at least two primary color regions and calculating a primary color value of each primary color region, wherein the primary color regions correspond to the primary color values one to one; carrying out color transition treatment on each basic color value; respectively carrying out color correction processing on each basic color value after color transition processing; and respectively carrying out color compensation processing on each basic color value after the color correction processing.

As an improvement of the above scheme, when the spectral signal is divided into at least three primary color regions, the step of performing color processing on the spectral signal in regions to generate target primary color values further includes: and performing color highlighting treatment on each basic color value after the color compensation treatment.

As an improvement of the above solution, the step of dividing the spectrum signal into at least two primary color regions and calculating a primary color value of each primary color region includes: dividing the spectrum signal into at least two primary color regions according to a frequency value; and respectively carrying out summation processing on the frequency corresponding values in each basic color area to generate basic color values.
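The region-summing step above can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; the region boundaries used here are hypothetical assumptions, since the document does not fix them.

```python
# Illustrative sketch (assumptions, not the patented implementation):
# split a 128-bin spectrum into frequency regions and sum the magnitudes
# in each region to obtain one base color value per region.
# The boundary values (16, 64) are hypothetical.

def base_color_values(spectrum, boundaries=(0, 16, 64, 128)):
    """Sum spectral magnitudes inside each region.

    spectrum   -- sequence of non-negative magnitudes (e.g. 128 FFT bins)
    boundaries -- region edges; region i covers bins
                  [boundaries[i], boundaries[i+1])
    """
    return [sum(spectrum[lo:hi])
            for lo, hi in zip(boundaries, boundaries[1:])]

# A flat spectrum of 128 ones yields region sums proportional to the
# region widths (16, 48 and 64 bins).
values = base_color_values([1.0] * 128)
```

With three regions the three sums map naturally onto R, G and B, matching the example given later in the text.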

As an improvement of the above solution, the step of performing color transition processing on each of the base color values respectively includes: extracting a current basic color value and N corresponding historical basic color values, wherein the current basic color value is a basic color value of the sound data obtained by calculation at the current moment, the N historical basic color values are basic color values of the sound data obtained by calculation at the previous N times of the current moment, one current basic color value corresponds to the N historical basic color values, and N is a positive integer; and calculating the average value of the current basic color value and the corresponding N historical basic color values, and taking the average value as the basic color value after color transition processing.
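The color-transition step above is a moving average over the current base color value and its N predecessors. A minimal sketch, assuming N = 3 (the document leaves N as a design parameter):

```python
from collections import deque

# Sketch of the claimed color-transition step: average the current base
# color value with its N most recent historical values. N = 3 here is an
# arbitrary illustrative choice.

class ColorTransition:
    def __init__(self, n=3):
        self.history = deque(maxlen=n)  # holds up to N previous values

    def smooth(self, current):
        # average of the current value and the stored history
        avg = (current + sum(self.history)) / (len(self.history) + 1)
        self.history.append(current)
        return avg

ct = ColorTransition(n=3)
```

Averaging over recent frames is what makes the color change "softer", as the beneficial-effects section later claims.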

As an improvement of the above scheme, the step of performing color correction processing on each base color value after color transition processing includes: multiplying each base color value after color transition processing by its corresponding preset correction coefficient for color correction, the preset correction coefficients corresponding one to one to the base color values after color transition processing.

As an improvement of the above, the step of performing color compensation processing on each base color value after color correction processing includes: detecting whether the primary color value after color correction is lost or not within a preset time; counting the detection times and the loss times of the base color values after the color correction processing respectively; and calculating to obtain the primary color value after the color compensation processing according to the detection times, the loss times and the primary color value after the color correction processing.

As an improvement of the above scheme, the step of detecting whether the color-corrected base color values are lost within the preset time includes: comparing the color-corrected base color value with a preset compensation value within the preset time, and judging whether it is smaller than the preset compensation value; if yes, the color-corrected base color value is deemed lost; if not, it is deemed not lost. The step of calculating the base color value after color compensation according to the number of detections, the number of losses and the color-corrected base color value includes: performing color compensation according to the formula RGB_i = RGB_j · RGB_c, wherein RGB_i is the base color value after color compensation, RGB_j is the base color value after color correction, RGB_c = 1 + (RGB_s / S) · K is the compensation coefficient, RGB_s is the number of losses, S is the number of detections, and K is a preset proportion.

As an improvement of the above solution, the step of performing color highlighting on each base color value after color compensation includes: extracting the minimum base color value from all the base color values after color compensation; subtracting the minimum base color value from each base color value after color compensation to generate reference base color values, the reference base color values corresponding one to one to the base color values after color compensation; extracting the maximum reference base color value from all the reference base color values; when the maximum reference base color value is not zero, performing a non-linear operation for color highlighting according to the formula RGB_o = D · [(RGB_i - V_min) / V_max]^8, wherein RGB_o is the base color value after color highlighting, D is the preset maximum intensity value, RGB_i is the base color value after color compensation, V_min is the minimum base color value, and V_max is the maximum reference base color value; the base color values after color highlighting correspond one to one to the base color values after color compensation; when the maximum reference base color value is zero, performing color highlighting according to the formula RGB_o = D.
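The highlighting step can be sketched as below, assuming D = 255 for 8-bit RGB output (the value the detailed description suggests for the maximum intensity). This is an illustrative sketch of the formula, not the patented firmware.

```python
# Sketch of color highlighting: subtract the minimum base color value,
# normalise by the maximum reference value, and raise to the 8th power
# so the dominant channel is strongly emphasised. D = 255 assumed.

def highlight(values, d=255):
    v_min = min(values)
    refs = [v - v_min for v in values]   # reference base color values
    v_max = max(refs)                    # maximum reference base color value
    if v_max == 0:                       # all channels equal -> RGB_o = D
        return [d for _ in values]
    return [d * ((v - v_min) / v_max) ** 8 for v in values]
```

The 8th-power curve drives all but the strongest channel toward zero, which is what makes one color stand out per frame.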

As an improvement of the above, the step of converting the sound data into the luminance value includes: calculating a sound average value of the sound data; processing the sound average value by adopting an automatic gain algorithm to obtain a reference average value of the current moment; and generating a brightness value according to the sound average value and the reference average value of the current moment.

As an improvement of the above scheme, the step of processing the sound average value with an automatic gain algorithm to obtain the reference average value of the current time includes: calculating the reference average value of the current time according to the sound average value, the last reference average value and the last count value, the last reference average value being the reference average value of the sound data acquired immediately before the current time, and the last count value being the count value of the sound data acquired immediately before the current time, wherein: when the sound average value is greater than the last reference average value, the sound average value is taken as the reference average value and the maximum reference sound value of the current time, and the count value of the current time is reset; when the sound average value is less than or equal to the last reference average value, and the last reference average value and the last count value are both greater than zero, the reference average value of the current time is calculated according to the formula A_agc_o = V_base · (V_count_0 / T), wherein A_agc_o is the reference average value of the current time, V_base is the maximum reference sound value, V_count_0 = V_count_i - 1 is the count value of the current time, V_count_i is the last count value, and T is the number of times the sound data is acquired within a preset time; otherwise, the reference average value of the current time is set to zero. The step of generating a brightness value according to the sound average value and the reference average value of the current time includes: when the reference average value of the current time is not zero, performing a non-linear operation according to the formula V_bright = D · (A_avg / A_agc_o)^8 to generate the brightness value, wherein V_bright is the brightness value, A_avg is the sound average value and D is the preset maximum intensity value; when the reference average value of the current time is zero, generating the brightness value according to the formula V_bright = 0.

As an improvement of the above solution, the step of generating the color control parameters according to the brightness value and the target base color values includes: calculating the color control factor corresponding to each target base color value according to the formula RGB_K = RGB_q · (V_bright / D), wherein RGB_K is the color control factor, RGB_q is the target base color value, V_bright is the brightness value and D is the preset maximum intensity value, the color control factors corresponding one to one to the target base color values; and combining all the color control factors to form the color control parameters.

Correspondingly, the invention also provides computer equipment which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the dynamic light control method when executing the computer program.

Correspondingly, the invention also provides a light dynamic control system, which comprises lighting equipment and the computer equipment, wherein the computer equipment is connected with the lighting equipment.

Accordingly, the present invention also provides a computer readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the above-mentioned light dynamic control method.

The implementation of the invention has the following beneficial effects:

according to the method, region-by-region color conversion is performed on the sound according to its frequency spectrum, generating colors matched to the sound; meanwhile, the invention also performs brightness conversion according to the sound, generating brightness matched to the sound; combining sound, color and brightness in this way creates the effect of color and brightness changing synchronously with the sound, effectively reflects the character of the sound, creates a pleasant atmosphere and improves the user experience.

Furthermore, the invention processes the spectrum signal with color transition, color correction, color compensation and color highlighting, so that the color changes are softer, the transitions richer, and the conversion to RGB colors better matched.

In addition, the invention adds an automatic gain algorithm and a non-linear algorithm, so that the pixels in the light group run rhythmically with the sound, and the light group keeps an essentially consistent running effect regardless of the music volume setting.

Drawings

FIG. 1 is a flow chart of a first embodiment of a method for dynamic sound-based light control according to the present invention;

FIG. 2 is a flow chart of a second embodiment of the dynamic light control method based on sound of the present invention;

FIG. 3 is a flow chart of a third embodiment of the dynamic light control method based on sound according to the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.

Referring to fig. 1, fig. 1 shows a first embodiment of the sound-based dynamic light control method of the present invention, which includes:

s101, sound data is acquired.

The collecting device collects point sound information at regular intervals, periodically combines the point sound information collected within a preset time period into sound data and sends it to the computer device; after obtaining the sound data, the computer device processes it. That is, the sound data comprises a plurality of pieces of point sound information, each piece being an analog signal.

In order for the lights of the light group to present a good flowing dynamic display effect, the invention may acquire point sound information once every 92 µs and input the sound data once for every 256 pieces of point sound information (A_1, A_2, …, A_256), i.e. the sound data is input approximately once every 23.5 ms; this is not limiting and may be adjusted according to the actual situation.

S102, converting the sound data into spectrum signals, and performing color processing on the spectrum signals in different regions to generate target base color values.

Specifically, a fast Fourier transform algorithm may be employed to convert the sound data into a spectrum signal for processing. In practice, the acquired 256 pieces of point sound information are fed into the fast Fourier transform, which outputs 128 frequency points forming the spectrum signal, the frequency points corresponding to frequencies of 0-5.435 kHz.
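The transform step can be sketched with NumPy. Note an assumption on our part: a real FFT of 256 samples yields 129 bins, so obtaining the 128 frequency points mentioned above presumably means dropping the DC bin. At a 92 µs sampling interval (roughly a 10.87 kHz sample rate) the bins span about 0-5.435 kHz, matching the text.

```python
import numpy as np

# Sketch of the spectrum conversion (assumption: the DC bin is dropped
# to get 128 frequency points from numpy.fft.rfft's 129 outputs).

samples = np.sin(2 * np.pi * np.arange(256) / 16)  # synthetic test tone
spectrum = np.abs(np.fft.rfft(samples))[1:]        # drop DC -> 128 bins
```

For this synthetic tone (period 16 samples) the energy lands in FFT bin 16, i.e. index 15 after the DC bin is dropped.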

Meanwhile, according to the frequency characteristics of the spectrum signal, the spectrum signal is divided into regions and color conversion is performed region by region, each region corresponding to one target base color value. For example, when divided into three regions, the regions can be mapped to the three primary colors red (R), green (G) and blue (B); when divided into two regions, to the two primary colors red (R) and blue (B).

S103, converting the sound data into a brightness value.

Accordingly, the present invention converts the sound data into the brightness with respect to the characteristics of the sound data. Specifically, the step of converting the sound data into a luminance value includes:

(1) the sound average of the sound data is calculated.

When the sound data comprises 256 pieces of point sound information, the sound average A_avg of the sound data can be calculated according to the formula A_avg = (A_1 + A_2 + … + A_256) / 256, thereby providing an averaged sound reference value for the sound data.

(2) And processing the sound average value by adopting an Automatic Gain Control (AGC) algorithm to obtain a reference average value of the current moment.

Specifically, the reference average value of the current time is calculated according to the sound average value, the previous reference average value, and the previous count value, where the previous reference average value is the reference average value of the sound data acquired immediately before the current time, and the previous count value is the count value of the sound data acquired immediately before the current time, where:

when the sound average value is larger than the last reference average value, taking the sound average value as the reference average value and the maximum reference sound value of the current moment, and resetting the count value of the current moment;

when the sound average value is less than or equal to the last reference average value, and the last reference average value and the last count value are both greater than zero, calculating the reference average value of the current time according to the formula A_agc_o = V_base · (V_count_0 / T), wherein A_agc_o is the reference average value of the current time, V_base is the maximum reference sound value, V_count_0 = V_count_i - 1 is the count value of the current time, V_count_i is the last count value, and T is the number of times the sound data is acquired within the preset time;

otherwise, setting the reference average value of the current moment to be zero.

That is to say: when A_avg > A_agc_i, set A_agc_o = A_avg, and reset V_base = A_avg and V_count_0 = T; when A_avg ≤ A_agc_i, A_agc_i > 0 and V_count_i > 0, keep V_base unchanged and set A_agc_o = V_base · (V_count_i - 1) / T; in all other cases, A_agc_o = 0.
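The gain rule can be sketched as a small stateful class. This is an illustrative Python sketch under the stated rules, not the patented firmware; T = 200 follows the preferred value given below.

```python
# Sketch of the automatic-gain rule: a new louder average becomes the
# reference peak; otherwise the reference decays linearly over T frames
# toward zero. Attribute names mirror the document's symbols.

class AutoGain:
    def __init__(self, t=200):
        self.t = t              # T: acquisitions per preset window
        self.a_agc = 0.0        # last reference average A_agc_i (initially 0)
        self.v_base = 0.0       # maximum reference sound value V_base
        self.count = 0          # count value V_count

    def update(self, a_avg):
        if a_avg > self.a_agc:                    # new louder peak
            self.v_base = a_avg
            self.count = self.t                   # reset count to T
            self.a_agc = a_avg
        elif self.a_agc > 0 and self.count > 0:   # linear decay
            self.count -= 1
            self.a_agc = self.v_base * self.count / self.t
        else:
            self.a_agc = 0.0
        return self.a_agc

agc = AutoGain(t=200)
```

Because the reference tracks the recent peak, quiet and loud recordings both get mapped onto a comparable brightness range, which is the volume-independence the beneficial-effects section claims.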

Wherein:

A_avg is the sound average value;

A_agc_i is the last reference average value, with initial value 0;

A_agc_o is the reference average value of the current time;

V_base is the maximum reference sound value;

V_count_i is the last count value;

T is the number of times the sound data is acquired within a preset time; preferably T = 200 (the number of acquisitions within about 5 s), though this is not limiting and may be adjusted according to the actual situation.

(3) And generating a brightness value according to the sound average value and the reference average value of the current time.

Specifically, when the reference average value of the current time is not zero, a non-linear operation is performed according to the formula V_bright = D · (A_avg / A_agc_o)^8 to generate the brightness value, wherein V_bright is the brightness value, A_avg is the sound average value and D is the preset maximum intensity value; correspondingly, when A_agc_o = 0, V_bright = 0.

Step (3) performs non-linear operation processing on the sound average A_avg from step (1) and the reference average A_agc_o from step (2) to generate a brightness value within a specific range.

For example, when the RGB three primary colors are used, the brightness range may be set to [0, 255], with D set to 255. Accordingly, a calculated brightness value of 0 means 0% brightness, and a calculated brightness value of 255 means 100% brightness.

Therefore, the brightness corresponding to the sound data can be effectively calculated through step S103, and a brightness value according with the sound characteristics is formed.
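The auto-gain reference average and the non-linear brightness mapping described above can be sketched as follows. This is a minimal illustration, not the patent's own code: T = 200 and D = 255 are the text's preferred values, and the function and variable names simply mirror the document's symbols.

```python
T = 200   # acquisitions per preset time window (about 5 s)
D = 255   # preset maximum intensity value

def update_reference(a_avg, a_agc_i, v_base, v_count_i):
    """Return (A_agc_o, V_base, V_count_0) for the current moment."""
    if a_avg > a_agc_i:
        # New peak: the reference follows the current sound average.
        return a_avg, a_avg, T
    if a_agc_i > 0 and v_count_i > 0:
        # Otherwise let the reference decay linearly over the window.
        v_count_0 = v_count_i - 1
        return v_base * v_count_0 / T, v_base, v_count_0
    return 0.0, v_base, 0

def brightness(a_avg, a_agc_o):
    """Non-linear mapping V_bright = D * (A_avg / A_agc_o) ** 8."""
    if a_agc_o == 0:
        return 0.0
    return D * (a_avg / a_agc_o) ** 8
```

Because of the 8th power, a sound average near the reference yields nearly full brightness while quieter samples fall off steeply, which is what makes the light pulse with the rhythm regardless of the absolute volume.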

And S104, generating color control parameters according to the brightness value and the target base color value.

It should be noted that one set of color control parameters may be generated for each piece of sound data. Each set of color control parameters includes at least two color control factors, which represent brightness and color. The number of color control factors matches the number of partitions of the spectrum signal, each partition corresponding to the color of one color control factor, and all color control factors in the same set share the same brightness.

For example, when the spectrum is divided into three regions, these can be converted into the three primary colors red (R), green (G) and blue (B); in this case the color control parameters include three color control factors: the R factor (brightness A + color R), the G factor (brightness A + color G) and the B factor (brightness A + color B).

Specifically, the step of generating the color control parameter according to the luminance value and the target base color value includes:

(1) According to the formula RGB_K = RGB_q·(V_bright/D), respectively calculate the color control factor corresponding to each target base color value.

Note that RGB_K is a color control factor, RGB_q is a target base color value, V_bright is the brightness value and D is the preset maximum intensity value; the color control factors correspond one to one with the target base color values. Accordingly, when the RGB three primary colors are employed, the preset maximum intensity value may be set to 255.

(2) All color control factors are combined to form color control parameters.

For example, when the spectrum signal is divided into three regions, these can be converted into the three primary colors red (R), green (G) and blue (B), and the color control parameters include three color control factors R_K, G_K and B_K, where R_K = R_q·(V_bright/255), G_K = G_q·(V_bright/255) and B_K = B_q·(V_bright/255); accordingly, the color control parameter is (R_K, G_K, B_K).

Therefore, step S104 fuses the color part and the brightness part of the sound data to form a unique color control parameter.
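The fusion in step S104 amounts to scaling every target base color value by the shared brightness. A minimal sketch, assuming RGB three primaries with D = 255 (the function name is illustrative, not from the source):

```python
D = 255  # preset maximum intensity value for RGB

def color_control_parameter(target_base_colors, v_bright):
    """Apply RGB_K = RGB_q * (V_bright / D) to each target base color
    value; all factors in the set share the same brightness V_bright."""
    return tuple(rgb_q * v_bright / D for rgb_q in target_base_colors)

# One factor per spectral partition: (R_K, G_K, B_K) at half brightness.
params = color_control_parameter((255, 0, 169), 128)
```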

And S105, outputting the color control parameters to each pixel point of the light group in sequence so as to enable the light of the light group to be dynamically displayed in a flowing mode.

For example, when the light group is a light strip, each strip is provided with a plurality of pixel points in sequence (e.g., from left to right: LED1, LED2, …, LED100). During control, the color control parameter is first output to the 1st pixel point LED1 of the strip, then to the 2nd pixel point LED2, and so on until the 100th pixel point LED100, finally forming a flowing-water effect of continuously moving light.

Correspondingly, in the process of continuously acquiring the sound data, the color control parameters are continuously output to the light group, so that the flowing water effect of the light group changing along with the sound is formed.
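One way to realise this continuous output is to treat the strip as a shift buffer: each new color control parameter enters at pixel 1 and earlier parameters move one position down the strip. This is a sketch of that interpretation (the source only says parameters are output to the pixels in sequence; the buffer mechanism and the 100-pixel strip size are assumptions for illustration):

```python
from collections import deque

NUM_PIXELS = 100  # e.g. LED1 .. LED100 on a light strip

# Each element is one (R_K, G_K, B_K) color control parameter.
strip = deque([(0, 0, 0)] * NUM_PIXELS, maxlen=NUM_PIXELS)

def push_parameter(param):
    """Shift the strip by one pixel and place the new parameter at LED1;
    the oldest parameter falls off the far end of the strip."""
    strip.appendleft(param)
    return list(strip)
```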

Therefore, different from the prior art, the invention combines the frequency, color and brightness of the sound to form an effect in which color and brightness change simultaneously with the sound; it is highly flexible, can effectively reflect the specificity of the sound, creates a good atmosphere and improves the user experience.

Referring to fig. 2, fig. 2 shows a second embodiment of the dynamic light control method based on sound according to the present invention, which comprises:

s201, sound data is acquired.

The sound data includes a plurality of point sound information, wherein the point sound information is an analog signal.

S202, converting the sound data into a spectrum signal.

Specifically, a fast fourier transform algorithm may be employed to convert the sound data into a spectral signal for processing.

S203, dividing the spectrum signal into at least two primary color regions and calculating a primary color value of each primary color region.

In the present embodiment, the spectrum signal is divided into three regions.

It should be noted that the primary color regions correspond to primary color values one to one. Specifically, the step of dividing the spectral signal into at least two primary color regions and calculating the primary color value of each primary color region includes:

(1) the spectral signal is divided into at least two primary color regions according to the frequency values.

For example, when the frequency point range of the spectrum signal is 0 to 5.435KHz, the extracted spectrum signal can be divided into 3 primary color regions (0 to 1KHz, 1K to 2.5KHz, 2.5K to 5.435KHz), where the primary color region 0 to 1KHz is used for converting into red (R) in the three primary colors, the primary color region 1K to 2.5KHz is used for converting into green (G) in the three primary colors, and the primary color region 2.5K to 5.435KHz is used for converting into blue (B) in the three primary colors.

(2) And respectively carrying out summation processing on the frequency corresponding values in each basic color area to generate basic color values.

By summing the frequency-corresponding values in each of the primary color regions (i.e., the values corresponding to the frequencies of the abscissa in the spectrogram), a more comprehensive value can be generated as the corresponding primary color value. Correspondingly, red (R) in the three primary colors can be generated by summing corresponding values of the frequencies in the primary color regions 0-1 KHz, green (G) in the three primary colors can be generated by summing corresponding values of the frequencies in the primary color regions 1K-2.5 KHz, and blue (B) in the three primary colors can be generated by summing corresponding values of the frequencies in the primary color regions 2.5K-5.435 KHz, so that the initial extraction of the colors can be realized by the method.
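Steps S202–S203 can be sketched as follows: FFT the sound frame, then sum the spectral magnitudes falling in each primary color region. The band edges are the document's example values; the sample rate of 10 870 Hz is an assumption chosen so that the Nyquist frequency matches the stated 5.435 kHz upper bound.

```python
import numpy as np

SAMPLE_RATE = 10870  # assumed; Nyquist = 5.435 kHz as in the example
BANDS = [(0, 1000), (1000, 2500), (2500, 5435)]  # R, G, B regions in Hz

def base_color_values(samples):
    """Convert a sound frame to a magnitude spectrum, then sum the
    magnitudes inside each primary color region to obtain the raw
    (R, G, B) base color values."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    return tuple(spectrum[(freqs >= lo) & (freqs < hi)].sum()
                 for lo, hi in BANDS)
```

With this partition, a low-frequency tone drives the red channel, a mid-frequency tone the green channel and a high-frequency tone the blue channel, which matches the color mapping described above.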

And S204, performing color transition processing on each basic color value.

Specifically, the step of performing color transition processing on each base color value respectively comprises the following steps:

(1) and extracting the current basic color value and the corresponding N historical basic color values.

It should be noted that the current base color value is the base color value calculated from the sound data at the current moment, and the N historical base color values are the base color values calculated during the N acquisitions before the current moment; one current base color value corresponds to N historical base color values, where N is a positive integer.

For example, for red in three primary colors, the current red base color value and the corresponding N historical red base color values need to be extracted; for the blue color in the three primary colors, the current blue basic color value and the corresponding N historical blue basic color values need to be extracted.

(2) And calculating the average value of the current basic color value and the corresponding N historical basic color values, and taking the average value as the basic color value after color transition processing.

Preferably, N is 2. After computing the average of each current base color value with its N corresponding historical base color values, three base color values after color transition processing are obtained: R_e = (R_a + R_(a-1) + R_(a-2))/3, G_e = (G_a + G_(a-1) + G_(a-2))/3, B_e = (B_a + B_(a-1) + B_(a-2))/3, where R_a, G_a and B_a are the current base color values, R_(a-1) and R_(a-2) are the historical base color values corresponding to R_a, G_(a-1) and G_(a-2) are those corresponding to G_a, and B_(a-1) and B_(a-2) are those corresponding to B_a.

Therefore, by respectively carrying out sliding buffer processing on each basic color value, the change among all pixel points in the lamplight group can be softer.
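The sliding buffer described above is a moving average per channel. A minimal sketch with N = 2 (the class name is illustrative; averaging over however many frames exist before the history fills is an assumption, since the text does not say how the first frames are handled):

```python
from collections import deque

class ColorTransition:
    """Average each channel over the current base color value and the
    N historical base color values (N = 2 per the text)."""

    def __init__(self, n_history=2):
        self.history = deque(maxlen=n_history + 1)

    def smooth(self, rgb):
        self.history.append(rgb)
        count = len(self.history)
        return tuple(sum(frame[i] for frame in self.history) / count
                     for i in range(3))
```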

And S205, respectively carrying out color correction processing on each basic color value after the color transition processing.

Specifically, the step of performing color correction processing on each base color value after color transition processing includes: and multiplying the primary color values after the color transition processing by corresponding preset correction coefficients for color correction, wherein the preset correction coefficients correspond to the primary color values after the color transition processing one by one.

For example, the base color values R_e, G_e, B_e after color transition processing can be multiplied by correction coefficients R_d, G_d, B_d respectively (i.e., R_j = R_e·R_d, G_j = G_e·G_d, B_j = B_e·B_d). In general, R_d > G_d > B_d. Preferably, the correction coefficients used by the present invention are R_d = 6, G_d = 5 and B_d = 4, but this is not limiting and the values may be adjusted according to the actual situation.

Therefore, by multiplying each of the color transition-processed primary color values by a correction coefficient, the data of the three primary color regions can be made more indicative of the RGB colors.

And S206, performing color compensation processing on each basic color value after color correction processing.

Specifically, the step of performing color compensation processing on each base color value after color correction processing includes:

(1) and detecting whether the primary color value after the color correction processing is lost or not within a preset time.

The corresponding checking method is as follows:

within the preset time, the base color value after color correction processing is compared with a preset compensation value to judge whether it is smaller than the preset compensation value: if yes, the base color value after color correction processing is judged to be lost; if not, it is judged not to be lost;

(2) the number of detections and the number of losses of the base color values after the color correction processing are counted, respectively.

(3) And calculating to obtain the primary color value after the color compensation processing according to the detection times, the loss times and the primary color value after the color correction processing.

Specifically, color compensation is performed according to the formula RGB_i = RGB_j·RGB_c, where RGB_i is the base color value after color compensation processing, RGB_j is the base color value after color correction processing, RGB_c = 1 + (RGB_s/S)·K is the compensation coefficient, RGB_s is the number of losses, S is the number of detections and K is a preset proportion.

For example, with the preset time set to the previous M seconds, M = 8 and K = 20%, it is determined for each of the three color-corrected base color values R_j, G_j, B_j whether it is smaller than the preset compensation value; if yes, the data is judged to be lost, otherwise not. Then, from the loss counts R_s, G_s, B_s over the previous 8 seconds and the detection count S over the previous 8 seconds, the compensation coefficients R_c = 1 + (R_s/S)·20%, G_c = 1 + (G_s/S)·20% and B_c = 1 + (B_s/S)·20% are obtained, and finally the compensated base color values R_i = R_j·R_c, G_i = G_j·G_c and B_i = B_j·B_c are calculated.

Preferably, the compensation coefficient is in the range of 1-1.2, but not limited thereto, and can be adjusted according to actual conditions.

Therefore, each base color value after color correction is subjected to color compensation processing through a color loss compensation algorithm, so that the mixed colors at the transitions between R and G, G and B, and R and B are richer at critical moments.
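The loss-compensation step above can be sketched per channel as follows. This is an illustration under stated assumptions: the class name is invented, the detection window is kept as a count of the last S detections rather than a wall-clock 8 seconds, and the default window of 160 assumes roughly 20 detections per second.

```python
from collections import deque

class ColorCompensator:
    """Apply RGB_i = RGB_j * RGB_c with RGB_c = 1 + (RGB_s / S) * K,
    where RGB_s counts how often the channel fell below the preset
    compensation value (threshold) over the last S detections."""

    def __init__(self, threshold, k=0.20, window=160):
        self.threshold = threshold
        self.k = k
        self.losses = [deque(maxlen=window) for _ in range(3)]

    def compensate(self, rgb_j):
        out = []
        for channel, losses in zip(rgb_j, self.losses):
            losses.append(1 if channel < self.threshold else 0)
            coeff = 1 + (sum(losses) / len(losses)) * self.k
            out.append(channel * coeff)
        return tuple(out)
```

With K = 20% the coefficient stays in the 1 to 1.2 range stated in the text, so a frequently lost channel is boosted by at most 20%.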

And S207, performing color highlighting processing on each basic color value after the color compensation processing.

Specifically, the step of performing color highlighting processing on each base color value after color compensation processing includes:

(1) extracting the minimum primary color value from all the primary color values after color compensation processing;

(2) subtracting the minimum basic color value from each basic color value after color compensation processing to generate a reference basic color value, wherein the reference basic color value corresponds to the basic color value after the color compensation processing one by one;

(3) extracting a maximum reference base color value from all the reference base color values;

(4) When the maximum reference base color value is not zero, a non-linear operation is performed for color highlighting according to the formula RGB_o = D·[(RGB_i - V_min)/V_max]^8, where RGB_o is the base color value after color highlighting processing, D is the preset maximum intensity value, RGB_i is the base color value after color compensation processing, V_min is the minimum base color value and V_max is the maximum reference base color value; the base color values after color highlighting processing correspond one to one with the base color values after color compensation processing. Accordingly, when V_max = 0, RGB_o = D. Preferably, D is 255, but this is not limiting.

In this embodiment, the base color value after the color highlighting process is used as the target base color value.

Specifically, among the three base color values R_i, G_i, B_i generated in step S206, the minimum is selected as V_min; V_min is then subtracted from each of R_i, G_i and B_i, and the maximum of the resulting differences is selected as V_max; the non-linear operation is then performed and the results are mapped into the range 0 to 255, i.e., R_o = 255·[(R_i - V_min)/V_max]^8, G_o = 255·[(G_i - V_min)/V_max]^8, B_o = 255·[(B_i - V_min)/V_max]^8.

For example, if step S206 generates R_i = 150, G_i = 50 and B_i = 145, the minimum base color value is V_min = G_i = 50 and the maximum reference base color value is V_max = R_i - V_min = 100; at this time, R_o = 255, G_o = 0 and B_o = 169 can be calculated according to the formula.

Therefore, the invention can make the color output mostly be three colors of red, green and blue by performing color highlighting processing on each basic color value respectively, and the color change is more obvious.
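The highlighting steps (1)–(4) can be sketched in a few lines; the worked example above (150, 50, 145) serves as a check. One assumption: when V_max = 0 (all channels equal) the text's fallback RGB_o = D is applied to every channel.

```python
D = 255  # preset maximum intensity value

def highlight(rgb_i):
    """Color highlighting: subtract the channel minimum, normalise by
    the largest difference, raise to the 8th power and scale to D."""
    v_min = min(rgb_i)                    # step (1)
    refs = [c - v_min for c in rgb_i]     # step (2): reference values
    v_max = max(refs)                     # step (3)
    if v_max == 0:
        return (D, D, D)                  # fallback when V_max = 0
    # step (4): non-linear operation RGB_o = D * ((RGB_i - V_min)/V_max)^8
    return tuple(round(D * (ref / v_max) ** 8) for ref in refs)
```

The 8th power pushes all but the strongest channel toward zero, which is why the output is mostly pure red, green or blue with sharper color changes.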

S208, the sound data is converted into a brightness value.

S209, generating color control parameters according to the brightness value and the target base color value.

And S210, outputting the color control parameters to each pixel point of the light group in sequence so as to enable the light of the light group to be dynamically displayed in a flowing mode.

The process of steps S208 to S210 in this embodiment may specifically refer to steps S103 to S105 in the first embodiment, which is not described herein again.

It can be seen from the above that the present invention combines sound frequency and color, so that low frequencies light up red, medium frequencies light up green and high frequencies light up blue; at the same time, other colors (RGB mixtures with different brightness) appear at color transitions and frequency boundaries, making the colors richer.

In addition, the invention also adds an automatic gain algorithm and a nonlinear algorithm, so that the point pixels in the light group can run rhythmically according to the sound, and the light group can keep basically consistent running effect no matter how much the volume of the music is set.

Referring to fig. 3, fig. 3 shows a third embodiment of the dynamic light control method based on sound according to the present invention, which includes:

s301, sound data is acquired.

S302, converting the sound data into a spectrum signal.

S303, dividing the spectrum signal into at least two primary color regions and calculating a primary color value of each primary color region.

In the present embodiment, the spectral signal is divided into two primary color regions.

And S304, performing color transition processing on each basic color value.

S305, color correction processing is performed on each base color value after color transition processing.

And S306, respectively carrying out color compensation processing on each basic color value after the color correction processing.

In this embodiment, the primary color value after the color compensation process is used as the target primary color value.

S307, the sound data is converted into a luminance value.

And S308, generating color control parameters according to the brightness value and the target base color value.

And S309, outputting the color control parameters to each pixel point of the light group in sequence so as to enable the light of the light group to be dynamically displayed in a flowing mode.

Unlike the second embodiment shown in fig. 2, in this embodiment, it is not necessary to perform color highlighting on each base color value after color compensation processing.

It should be noted that, when the spectral signal is divided into two primary color regions for color processing, the primary color value after color compensation processing can be used as the target primary color value, and color highlighting processing is no longer required.

Correspondingly, the invention also discloses computer equipment which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the dynamic light control method when executing the computer program. Meanwhile, the invention also discloses a lamplight dynamic control system, which comprises lighting equipment and the computer equipment, wherein the computer equipment is connected with the lighting equipment; it should be noted that, the computer device and the lighting device may be connected wirelessly or by wire; preferably, the lighting device of the present invention is a light strip, but not limited thereto, and the type of the lighting device may be selected according to actual use conditions. In addition, the invention also discloses a computer readable storage medium, on which a computer program is stored, wherein the computer program realizes the steps of the light dynamic control method when being executed by a processor.

While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.
