System and method for selecting training subjects

Document No.: 835673    Publication date: 2021-03-30

Note: This technology, "System and method for selecting training subjects", was created by Ali Vahdat and Vahid Partovi Nia on 2019-08-15. Its main content includes: Methods and systems for training a Machine Learning (ML) model to predict the gain of a target channel of a multi-channel amplifier device are described. The ML model may be pre-trained using an existing set of training objects. The trained ML model can then be utilized to suggest additional useful training objects to be labeled, which will improve the performance of the ML model by more accurately predicting the target channel gain given the on/off values of the channel inputs.

1. A method, comprising:

training a Machine Learning (ML) model using a set of labeled training objects and an ML algorithm to generate an ML model for predicting a gain associated with a target channel of a plurality of channels of an amplifier device, each labeled training object of the set of labeled training objects comprising an indication of a channel load of the amplifier device and an indication of a gain of the target channel;

receiving a plurality of unlabeled training objects, each unlabeled training object comprising an indication of a channel load of the amplifier device;

determining additional labeled training objects from the unlabeled training objects for further training the generated ML model, the determining comprising:

determining a variance model based on the type of the generated ML model and the plurality of unlabeled training objects;

selecting a candidate training object from the plurality of unlabeled training objects based on a maximum value of the variance model;

receiving a measured gain value for the target channel based on the channel load indicated by the candidate training object;

generating an additional labeled training object based on the candidate training object and the measured gain value;

adding the additional labeled training object to the set of labeled training objects; and

further training the generated ML model using the set of labeled training objects including the additional labeled training object.

2. The method of claim 1, wherein training comprises:

receiving the set of labeled training objects;

determining a list of features of the set of labeled training objects;

sorting the features in the feature list, thereby generating a sorted list of features;

augmenting the labeled training objects to include a subset of the features in the sorted list of features; and

training the ML model using the augmented labeled training objects and the ML algorithm to generate the ML model.

3. The method of claim 2, wherein the subset of features comprises a number of highest-ranked features in the sorted list of features.

4. The method of claim 3, further comprising determining the number of highest-ranked features by determining a combination of features in the sorted list of features that optimizes a performance metric of the generated ML model.

5. The method of any of claims 1-4, wherein the generated ML model comprises a linear model.

6. The method of claim 5, further comprising determining a closed form solution for the variance model and determining the maximum value for the variance model based on the closed form solution for the variance model.

7. The method of any one of claims 1 to 6, further comprising:

determining a plurality of predicted gain values corresponding to the candidate training object; and

determining a variance of the plurality of predicted gain values.

8. The method of claim 7, wherein selecting the candidate training object from the plurality of unlabeled training objects comprises: selecting the candidate training object based on the variance of the plurality of predicted gain values.

9. The method of any one of claims 1 to 8, further comprising: displaying the candidate training object.

10. The method of claim 9, further comprising: displaying an indication of the target channel.

11. A method of generating additional training objects for a generated Machine Learning (ML) model trained to predict a gain of a target channel of a plurality of channels of an amplifier device, the generated ML model having been learned using a set of labeled training objects, each labeled training object of the set of labeled training objects comprising an indication of a channel load of the amplifier device and an indication of a gain of the target channel, the method comprising:

receiving a plurality of unlabeled training objects, each unlabeled training object comprising an indication of a channel load of the amplifier device;

determining additional labeled training objects from the unlabeled training objects for further training the generated ML model, the determining comprising:

determining a variance model based on the type of the generated ML model and the plurality of unlabeled training objects;

determining a plurality of channel loads for the variance model;

determining a plurality of variance parameters using the variance model and the plurality of channel loads, wherein each variance parameter of the plurality of variance parameters corresponds to one channel load of the plurality of channel loads;

selecting one of the plurality of channel loads based on the plurality of variance parameters;

determining a candidate training object from the plurality of unlabeled training objects based on the selected channel load;

receiving a measured gain value for the target channel based on a channel load indicated by the candidate training object;

generating an additional labeled training object based on the candidate training object and the measured gain value; and

further training the generated ML model using the set of labeled training objects including the additional labeled training object.

12. The method of claim 11, wherein training comprises:

receiving the set of labeled training objects;

determining a list of features of the set of labeled training objects;

removing one or more features in the feature list;

generating a sorted list of features by sorting the features in the list of features; and

generating the ML model using a number of highest-ranked features in the sorted list.

13. The method of claim 12, further comprising determining the number of highest-ranked features by determining a combination of features in the sorted list of features that optimizes a performance metric of the generated ML model.

14. The method of any of claims 11-13, wherein the candidate training object indicates a channel load that maximizes the variance model.

15. The method of any of claims 11-14, further comprising sending the candidate training object to a device for measuring one or more gain values of the amplifier device.

16. The method of any of claims 11 to 15, wherein the generated ML model comprises a quadratic model.

17. The method of any of claims 11 to 16, wherein the generated ML model comprises one or more tree models.

18. The method of any of claims 11-17, wherein determining the plurality of channel loads comprises determining a set of all possible channel loads for the variance model.

19. The method according to any of claims 11-18, wherein determining the plurality of variance parameters comprises, for each channel load of the plurality of channel loads:

determining a plurality of predicted gain values corresponding to the respective channel load; and

determining an empirical variance of the plurality of predicted gain values.

20. The method of claim 19, wherein determining the plurality of predicted gain values comprises determining an output of each of a plurality of trees.

21. A computing system, comprising:

a processor; and

a memory storing computer readable instructions that, when executed by the processor, cause the processor to perform the method of any of claims 1 to 10.

22. A computer-readable medium storing instructions that, when executed by a computing system, cause the computing system to perform the method of any of claims 1 to 10.

23. A computing system, comprising:

a processor; and

a memory storing computer readable instructions that, when executed by the processor, cause the processor to perform the method of any of claims 11 to 20.

24. A computer-readable medium storing instructions that, when executed by a computing system, cause the computing system to perform the method of any of claims 11 to 20.

Technical Field

The present invention relates generally to the field of amplifier devices, and more particularly to systems and methods related to training a Machine Learning (ML) model for predicting the gain of a target channel of a multi-channel amplifier device.

Background

The multi-channel amplifier device may be used for amplifying signals, such as electrical or optical signals. The gain of a target channel of the multi-channel amplifier device may comprise a logarithmic difference between the power of the output signal at the output port associated with the target channel and the power of the input signal at the input port associated with the target channel. A model may be constructed to predict the gain of each channel of the multi-channel amplifier device. As signals are added to or removed from a channel, it may be useful to predict how the addition or removal of those signals will affect the gain of other channels of the multi-channel amplifier. The above model can be used to predict the effect of adding or removing signals in a channel.

Disclosure of Invention

The following summary is for illustrative purposes only and is not intended to limit or restrict the detailed description. The following summary merely presents various described aspects in a simplified form as a prelude to the more detailed description provided below.

A multi-channel amplifier device (hereinafter amplifier device) may be used to amplify a signal, such as an electrical or optical signal. For example, erbium doped fiber amplifiers may be used to amplify optical signals transmitted via optical fibers. The gain of each channel of the amplifier device is affected by the voltage or power of the signal not only at the corresponding channel input port, but also at one or more other input ports of the amplifier device. The particular combination of active and inactive channels of a multi-channel amplifier is referred to as the input combination or channel load. A model of the gain of the target channel of the amplifier device may be used to determine or predict the gain of the target channel (i.e., the selected channel of interest) of the amplifier device. The model of the gain of the target channel of the amplifier device may receive as inputs a power value (hereinafter referred to as a signal strength value) of the input port corresponding to the target channel and a voltage or power value of one or more other input ports of the amplifier device, and the model may output a corresponding predicted gain of the target channel.

A model of the gain of the target channel of the amplifier device may be constructed by testing each possible input combination of the amplifier device and measuring the resulting gain of the target channel of the amplifier device for each possible input combination. The voltage and/or current may be measured for each channel at the input and/or output ports of the amplifier device. The model of the gain may predict a voltage gain and/or a current gain of the target channel based on the combination of inputs. The power of the input and/or output signal for each channel may be calculated from the measured voltage and current. The model of the gain may predict the power gain at the channel based on the input combination. Input combination refers to the different ways in which the inputs of the amplifier device can be switched on or off. For example, if there are 10 channels on the amplifier device, then there are 2^10 = 1,024 different possible input combinations, because in practice each input can be switched on or off (2 possibilities). However, when the amplifier device comprises many channels, it may be time consuming to measure the signal strength (current and/or voltage) at the input port and/or the output port of the amplifier device for each possible input combination of the amplifier device in order to determine the gain of the target channel. It may be preferable to learn a model of the gain of the target channel of the amplifier device without measuring the input and output signal strength of the amplifier device for each possible input combination.
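As a rough illustration of this combinatorial growth, the following sketch (illustrative only; the names used are not part of the described method) enumerates the possible channel loads of a 10-channel device:

```python
from itertools import product

# Each of the N channel inputs can be switched on (1) or off (-1), so a
# brute-force approach would need 2**N measurements, one per channel load.
N = 10
channel_loads = list(product([1, -1], repeat=N))
print(len(channel_loads))  # 1024; for a 40-channel device this would be 2**40
```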

A model of the gain of a target channel of an amplifier device (e.g., a selected channel of the amplifier device) may be learned using a number of training objects. Each training object may be one or more measured gains in response to a particular combination of inputs. The training objects may be in the format of <x, y> pairs, where x is a vector of values for all or a subset of the input channels of the amplifier device, i.e., which channels are active and which channels are inactive, and y is the gain of the target channel when the channel load of x is applied to the amplifier device. A learned model of the target channel (hereinafter ML model) is used to predict the gain of the target channel given the input combination of the amplifier device. The ML model for the gain of the target channel is the model that maps a given input combination to a gain value. The ML model can be used to predict the gain of a channel of the amplifier device given a combination of inputs. In some embodiments, the ML algorithm may use the training objects to learn the ML model. In other embodiments, the ML model may be learned using a neural network (or a decision tree, or any other ML method) and the training objects. A learned ML model (also referred to as a generated ML model) approximated by a neural network or a decision tree can be used to predict the gain of the target channel of the amplifier device given a combination of inputs to the amplifier device.

Various methods may be used to select training objects for labeling. For example, the training objects may be selected by an algorithm from a pool of all possible input combinations that have not yet been labeled. Random sampling, i.e. random selection of training objects, may be used. Alternatively, an active learning model may be used, in which an initial set of labeled training objects is used to generate the ML model (e.g., the generated ML model), and then additional training objects are selected (with the aid of the generated ML model), labeled, and used to further refine the generated ML model (e.g., to refine the coefficients of the generated ML model). If done properly, active learning is more efficient than random sampling.
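A minimal sketch of such a pool-based active-learning loop is shown below; the helper callables (train_model, select_most_uncertain, measure_gain) are hypothetical placeholders supplied by the caller rather than part of the described system:

```python
def active_learning(labeled, unlabeled_pool, rounds, batch_size,
                    train_model, select_most_uncertain, measure_gain):
    """Pool-based active learning: repeatedly label the most uncertain channel
    loads and retrain.  All three helpers are caller-supplied placeholders."""
    model = train_model(labeled)                      # pre-train on the labeled set
    for _ in range(rounds):
        # pick the channel loads the current model is least certain about
        batch = select_most_uncertain(model, unlabeled_pool, batch_size)
        for channel_load in batch:
            gain = measure_gain(channel_load)         # lab measurement = label
            labeled.append((channel_load, gain))      # new labeled training object
            unlabeled_pool.remove(channel_load)
        model = train_model(labeled)                  # further train / refine
    return model
```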

According to a broad aspect, there is provided a method comprising: training a Machine Learning (ML) model using a set of labeled training objects and an ML algorithm to generate an ML model for predicting a gain associated with a target channel of a plurality of channels of an amplifier device, each labeled training object of the set of labeled training objects comprising an indication of a channel load of the amplifier device and an indication of a gain of the target channel; receiving a plurality of unlabeled training objects, each unlabeled training object including an indication of a channel load of the amplifier device; determining additional labeled training objects from the unlabeled training objects for further training the generated ML model, the determining comprising: determining a variance model based on the type of the generated ML model and the plurality of unlabeled training objects; selecting a candidate training object from the plurality of unlabeled training objects based on a maximum value of the variance model; receiving a measured gain value for the target channel based on the channel load indicated by the candidate training object; generating an additional labeled training object based on the candidate training object and the gain value; and adding the additional labeled training object to the set of labeled training objects. The method also includes further training the generated ML model using the set of labeled training objects including the additional labeled training object.

According to another broad aspect, there is provided a method of generating an additional training object for a generated Machine Learning (ML) model trained to predict a gain of a target channel of a plurality of channels of an amplifier device, the generated ML model having been learned using a set of labeled training objects, each labeled training object of the set of labeled training objects comprising an indication of a channel load of the amplifier device and an indication of the gain of the target channel. The method comprises the following steps: receiving a plurality of unlabeled training objects, each unlabeled training object including an indication of a channel load of the amplifier device; and determining additional labeled training objects from the unlabeled training objects for further training of the generated ML model. The determining includes: determining a variance model based on the type of the generated ML model and the plurality of unlabeled training objects; determining a plurality of channel loads for the variance model; determining a plurality of variance parameters using the variance model and the plurality of channel loads, wherein each variance parameter of the plurality of variance parameters corresponds to one channel load of the plurality of channel loads; selecting one of the plurality of channel loads based on the plurality of variance parameters; determining a candidate training object from the plurality of unlabeled training objects based on the selected channel load; receiving a measured gain value for the target channel based on the channel load indicated by the candidate training object; and generating an additional labeled training object based on the candidate training object and the gain value. The method also includes further training the generated ML model using the set of labeled training objects including the additional labeled training object.

In some cases, there may be a budget on how many training objects may be labeled and/or a limit on the amount of time available for labeling. It may be preferable to select additional training objects that will provide the most accurate ML model of the target channel of the amplifier device, given the budget and/or labeling time constraints on the training objects. The additional training objects may be selected by an algorithm from a set of candidate training objects that have not been labeled.

The summary herein is not an exhaustive list of novel features described herein and does not limit the claims. These and other features are described in more detail below.

Drawings

These and other features, aspects, and advantages of the present disclosure will become better understood with regard to the following description, appended claims, and accompanying drawings. The present disclosure is illustrated by way of example and is not limited by the accompanying figures, in which like references indicate similar elements.

FIG. 1 is a block diagram of a computing system that may be used to implement apparatus and methods according to representative embodiments described herein.

Fig. 2 shows a schematic diagram of an amplifier device according to one or more illustrative aspects of the present disclosure.

Fig. 3 shows a simplified diagram of interaction between channels of an amplifier device according to one or more illustrative aspects of the present disclosure.

FIG. 4 shows a simplified diagram of training a Machine Learning (ML) model, according to one or more illustrative aspects of the present disclosure.

FIG. 5 is a flow diagram of a method for generating ML models and generating additional labeled training objects for training the generated ML models in accordance with one or more illustrative aspects of the present disclosure.

FIG. 6 is a flow diagram of a method for selecting features for generating an ML model in accordance with one or more illustrative aspects of the present disclosure.

Fig. 7 is a flow diagram of a method for ranking features in accordance with one or more illustrative aspects of the present disclosure.

FIG. 8 is a flow diagram of a method for selecting a model according to one or more illustrative aspects of the present disclosure.

Fig. 9 is a flow diagram of a method for determining additional training subjects based on variance in accordance with one or more illustrative aspects disclosed herein.

FIG. 10 is a flow diagram of a method for solving a variance model and selecting additional training objects from a pool in accordance with one or more illustrative aspects of the present disclosure.

Fig. 11 is a flow diagram of a method for solving a variance model and generating additional training objects according to one or more illustrative aspects disclosed herein.

Fig. 12 is a flow diagram of a method for iteratively determining a maximum variance and selecting additional training objects from a pool in accordance with one or more illustrative aspects of the present disclosure.

Fig. 13 is a flow diagram of a method for iteratively determining a maximum variance and generating additional training objects according to one or more illustrative aspects disclosed herein.

Fig. 14 is a flow diagram of a method for iteratively determining a maximum empirical variance and selecting additional training objects from a pool in accordance with one or more illustrative aspects disclosed herein.

Fig. 15 is a flow diagram of a method for iteratively determining a maximum empirical variance and generating additional training objects according to one or more illustrative aspects of the present disclosure.

Fig. 16 is a flow diagram of a method for generating a model of a multi-channel amplifier according to one or more illustrative aspects of the present disclosure.

Fig. 17 is a flow diagram of a method for determining additional channels to activate in accordance with one or more illustrative aspects of the present disclosure.

Fig. 18 is a flow diagram of a method for determining an optical signal-to-noise ratio according to one or more illustrative aspects of the present disclosure.

Detailed Description

In the following description of various illustrative embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration various embodiments in which the aspects disclosed herein may be practiced. It is to be understood that other embodiments may be utilized and structural or functional modifications may be made without departing from the scope of the present disclosure.

Amplifier devices may be used to amplify signals, such as electrical or optical signals. The amplifier device may comprise a repeater and/or may perform the function of a repeater. The amplifier device may comprise an optical amplifier and/or a repeater, such as an erbium-doped fiber amplifier (EDFA). For example, an optical amplifier may be coupled to an optical fiber, which may be referred to as a link, and amplify the power of an optical signal transmitted through the optical fiber. The amplifier device may comprise a multi-channel amplifier in which there are a plurality of input and output ports and each channel has a corresponding input and output port.

The model of the amplifier device may represent various physical properties of the amplifier device. A Machine Learning (ML) algorithm and a training data set comprising training objects may be used to learn (or generate) a model of the gain of a given channel of a multi-channel amplifier. The ML algorithm may be a supervised learning ML algorithm, and the training objects may be labeled training objects. The learned or generated ML model may receive input values for an input port of a given channel and input values for one or more input ports of other channels of the amplifier device and output a predicted output (e.g., gain) of the given channel (e.g., a target channel) of the amplifier device. For a given channel, the learned or generated ML model may indicate the logarithmic difference between the predicted signal strength value (e.g., power) at the output port of the channel and the signal strength value (e.g., power) of the signal at the input port of the channel. This logarithmic difference between the output and input signal strength values may be referred to as the gain of the channel or the channel gain.

One "brute force" method of constructing or building a model of the gain of a given channel for a multi-channel amplifier would be to measure the output signal strength value at the output port of the amplifier for one input combination of the target channels of the amplifier, determine the gain of the channel based on the ratio of the output signal strength value to the input signal strength value of the given channel, and repeat these steps for all possible input combinations of the given channel. After using this "brute force" approach, the gain value for a given channel of the multi-channel amplifier may be determined by retrieving the corresponding output for the given channel for a particular combination of inputs (e.g., using a look-up table). However, it may be preferred, for example, that it is less time consuming to learn a model of the gain of a given channel of the amplifier device without measuring input and output signal strength values for each possible input combination of the given channel. In order to model the gain of a given channel of an amplifier without measuring the output signal strength value of the given channel for every possible input combination, various Machine Learning (ML) algorithms may be used. In some embodiments, an ML algorithm may be used to learn the function.

The ML model of the gain of the channel may be trained by providing an ML algorithm with a set of training objects to be learned. Training the ML model may result in an ML model that may be used to predict a gain value for a target channel of the amplifier given one or more input values to the amplifier (hereinafter referred to as the generated ML model). After the ML model is generated, additional training objects may be added to the set of training objects to form an updated set of training objects, which may then be provided to an ML algorithm used to further train the generated ML model, thereby generating an improved generated ML model.

Additional training objects may be determined using an active learning process, where the generated ML model is used to select the additional training objects. Active learning may provide a more efficient training object for the generated ML model than randomly selecting a training object. For example, an ML model with a desired level of accuracy may be trained using 1000 training objects selected by active learning, while an ML model using random sample learning may use 10,000 randomly selected training objects to achieve the same desired level of accuracy.

The optical link may be configured and/or used to transmit data. The optical link may include one or more amplifier devices. One or more ML models may be used in order to improve performance of lightpath setup, improve signal-to-noise ratio of links, predict performance of links, improve network optimization, more efficiently allocate resources, and/or for other purposes.

FIG. 1 is a block diagram of a computing system 100 that may be used to implement a method according to representative embodiments described herein. A particular computing system 100 may utilize all or only a subset of the components shown, and the level of integration may vary between different computing systems 100. Further, computing system 100 may contain multiple instances of a component, such as multiple processing units, processors, memories, transmitters, receivers, etc. Computing system 100 may be any type of suitable computing device, such as a physical computer, a server within a data center, or a virtual machine. Computing system 100 may include a Central Processing Unit (CPU) 114, a bus 120, and/or a memory 108, and optionally also a mass storage device 104, a video adapter 110, and/or an input/output (I/O) interface 112 (shown in dashed lines). Those skilled in the art will appreciate that CPU 114 represents processing capability. In some embodiments, computing system 100 may also include a special-purpose processing unit in place of, or in addition to, CPU 114. For example, computing system 100 may include a Graphics Processing Unit (GPU), a Tensor Processing Unit (TPU), a Neural Processing Unit (NPU), and/or may provide other so-called acceleration processors (or Processing accelerators) in addition to or in place of CPU 114.

The memory 108 may include any type of non-transitory system memory, such as Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), Synchronous DRAM (SDRAM), Read Only Memory (ROM), combinations thereof, or the like. For example, the memory 108 may include ROM for use at startup and DRAM for program and data storage for use in executing programs. The bus 120 may be one or more of any type of several bus architectures including a memory bus or memory controller, a peripheral bus, and/or a video bus. The memory 108 may include the software 102. The software 102 may be retrieved from the memory 108 and executed by the CPU 114 and/or a processing accelerator (not illustrated), such as a GPU, TPU, NPU, or the like.

The mass storage device 104 may include any type of non-transitory storage device for storing data, programs, and other information and for making the data, programs, and other information accessible via the bus 120. The mass storage device 104 may include, for example, one or more of a solid state drive, a hard disk drive, a magnetic disk drive, and/or an optical disk drive.

The video adapter 110 and the I/O interface 112 may provide interfaces to couple external input and output devices to the computing system 100. Examples of input and output devices include a display 118 coupled to the video adapter 110 and an I/O device 116, such as a touch screen, coupled to the I/O interface 112. Other devices may be coupled to computing system 100 and additional or fewer interfaces may be utilized. For example, a serial interface such as a Universal Serial Bus (USB) (not shown) may be used to provide an interface for external devices.

Computing system 100 may also include one or more network interfaces 106, which may include at least one of a wired link, such as an ethernet cable, and/or a wireless link to access one or more networks 122. Network interface 106 may allow computing system 100 to communicate with remote entities via network 122. For example, the network interface 106 may provide wireless communication via one or more transmitters/transmit antennas and one or more receivers/receive antennas. The computing system 100 may communicate with a local or wide area network for data processing and communication with remote devices, such as other processing units, the internet, or remote storage facilities.

Fig. 2 illustrates a simplified diagram of a multi-channel amplifier device 210 (hereinafter amplifier device 210) according to one or more illustrative aspects of the present application. For the purposes of this example, the amplifier device 210 is an optical amplifier, such as an Erbium Doped Fiber Amplifier (EDFA). In the example embodiment illustrated in fig. 2, the amplifier device 210 includes forty (40) channels. The amplifier device 210 may receive one or more signals x1 to x40 as input, and output one or more amplified signals y1 to y40, as described in further detail below. The input signal and the amplified output signal are optical signals. Although the multi-channel amplifier device 210 illustrated in fig. 2 is an optical amplifier, it will be appreciated that in other embodiments, the amplifier device 210 may be another type of amplifier device, such as an electrical amplifier in which the input signal and the amplified output signal are electrical signals.

The amplifier device 210 includes forty input ports 220-1 through 220-40 (individually referred to as an input port 220 and collectively as the input ports 220) and forty output ports 225-1 through 225-40 (individually referred to as an output port 225 and collectively as the output ports 225). Each channel of the amplifier device 210 includes a pair of input and output ports. For example, channel 1 includes an input port 220-1 and an output port 225-1. Each corresponding input port 220-1 through 220-40 may receive an input signal (x1 to x40) via one of the corresponding input cables (e.g., optical fibers) 230-1 through 230-40 (individually referred to as an input cable 230 and collectively as the input cables 230). Each corresponding output port 225-1 through 225-40 may output one of the amplified signals via one of the corresponding output cables 240-1 through 240-40 (individually referred to as an output cable 240 and collectively as the output cables 240). For each channel of the amplifier device 210, a gain may be calculated. The gain may be calculated by determining the logarithmic difference between the signal strength value (e.g., power) of the output signal at one of the output ports 225 and the signal strength value (e.g., power) of the signal at the corresponding one of the input ports 220. For example, the gain of the third channel of the amplifier, which includes the input port 220-3 and the output port 225-3, may be calculated as log(y3) - log(x3). The gain of the third channel including the input port 220-3 and the output port 225-3 may be referred to as the channel gain of the third channel. Similarly, the difference between log(y40) and log(x40) may be referred to as the gain of channel 40, or the gain across channel 40. The gain of each channel may be measured in decibels (dB). Although the amplifier device 210 is illustrated as having forty channels, the amplifier device 210 may include any number of channels, such as 80, 96, etc.
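For illustration, a minimal sketch of this per-channel gain calculation is given below; it assumes powers expressed in linear units (e.g., milliwatts) and uses the conventional 10*log10 decibel definition, which is an assumption rather than a statement of the described method:

```python
import math

def channel_gain_db(p_in, p_out):
    # gain = logarithmic difference between output and input power, in dB
    return 10.0 * (math.log10(p_out) - math.log10(p_in))

# e.g. the gain of channel 3 from illustrative measured powers (milliwatts)
print(channel_gain_db(p_in=0.05, p_out=0.5))  # 10.0 dB
```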

Each channel, including its input port 220, may be referred to as active or inactive. A predetermined threshold may be used to determine whether the input port 220 of the channel is active. The signal strength of an input signal received at the input port 220 via the input cable 230 may be measured and compared to the predetermined threshold to determine whether the input port 220 is active or inactive. A given channel is defined as active when the signal strength at the input port 220 of the given channel is above the threshold, and is defined as inactive when the signal strength at the input port 220 of the given channel is below the threshold. An active channel may indicate that data is currently being transmitted through the input port corresponding to the channel.

The indication of which channels (comprising the input ports 220) of the amplifier device 210 are active may be referred to as the channel load (also referred to as the input combination). For example, the channel load [1, -1, 1, -1, 1, -1, ..., 1, -1] indicates that all odd-numbered channels (x1, x3, x5, ..., x39) of the amplifier device 210 are active and all even-numbered channels (x2, x4, x6, ..., x40) of the amplifier device 210 are inactive. In other words, in this example, all odd-numbered input signals have signal strength values above the predetermined threshold, and all even-numbered input signals have signal strength values below the predetermined threshold.
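A minimal sketch of this thresholding, assuming illustrative signal strength values and the -36 dB threshold mentioned later in this description:

```python
def channel_load(input_strengths_db, threshold_db=-36.0):
    # map each channel's measured input signal strength to 1 (active) or -1 (inactive)
    return [1 if s > threshold_db else -1 for s in input_strengths_db]

# odd-numbered channels active, even-numbered channels inactive, as in the example above
strengths = [-23.0 if i % 2 == 0 else -49.0 for i in range(40)]
print(channel_load(strengths)[:6])  # [1, -1, 1, -1, 1, -1]
```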

A plurality of amplifier devices 210 may be connected to each other, i.e. cascaded, for example at various intervals on a link. The link may comprise a long haul fiber optic connection. The model for each channel of the cascaded amplifier device 210 may be used to determine the optical signal-to-noise ratio over a long-haul fiber connection.

The gain of each channel may be affected by various factors, such as the channel load of the amplifier device 210, the age of the amplifier device 210, the current temperature of the amplifier device 210, and/or other factors. Manufacturing differences or other differences may cause each individual amplifier device 210 manufactured to provide a different gain when normalized by other factors such as temperature and channel loading. A model of the gain of one of the channels of the amplifier device 210 may be generated, where the model receives the channel load of the amplifier device 210 and outputs the gain of the one of the channels. For each amplifier device 210 manufactured, it may be preferable to generate a model specific to that amplifier device 210.

Fig. 3 shows a simplified diagram of the interaction between channels of the amplifier device 210 in accordance with one or more illustrative aspects of the present application. As illustrated in fig. 3, the gain of a channel may be affected not only by the input signal at the input port 220 corresponding to the same channel, but also by the input signals at the input ports 220 of all or a subset of the other channels of the amplifier device 210. For example, as illustrated, the gain of the first channel, which includes the input port 220-1 and the output port 225-1, may be affected not only by the input signal at the input port 220-1, but also by the input signals at the input ports 220-2 through 220-40 of all other channels of the amplifier device 210.

The effect of each input port220 on the gain of each channel of the amplifier device 210 may be different for each amplifier device 210 manufactured. For example, for a first amplifier device 210, whether a channel including input port 220-3 is active may have a greater impact on the gain of a channel including output port 225-2, but for another amplifier device 210, whether a channel including input port 220-3 is active may have no impact on the gain of a channel including output port 225-2. A model may be trained for each channel of the manufactured amplifier devices 210 to model the gain of the one output port of interest for that particular amplifier device 210.

Fig. 4 illustrates a simplified diagram of a method for generating additional labeled training objects for training a generated ML model of a target channel of the amplifier device 210, according to one or more illustrative aspects of the present disclosure. The illustrated method 400 may train an ML model 420 of a target channel of an amplifier device, such as the amplifier device 210, by providing labeled training objects 410 from which the ML model 420 may be learned. The generated ML model may predict the gain of the target channel of the amplifier device 210 given a channel load (also referred to herein as an input combination). In some embodiments, further training the generated ML model 420 using labeled training objects may result in an improved generated ML model 420.

The set of labeled training objects 410 may include a set of measured data corresponding to the amplifier device 210. Each labeled training object in the set of training objects 410 may include an input value of the input port 220 of each channel of the amplifier device 210 and a corresponding measured gain of the target channel of the amplifier device 210. For example, for the amplifier device 210 illustrated in fig. 2, each training object 410 for the generated ML model of the gain of channel 1 may take the format [x1, x2, ..., x40][y1]. The measured gain of the channel is in units of decibels (dB). The signal strength value (e.g., power) may be measured by a device for measuring the signal strength at the output port 225 of the amplifier device 210, and/or may be measured by a technician. Any input signal below the threshold maps to -1 and any input signal above the threshold maps to 1.
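A minimal sketch of assembling one labeled training object in this [x1, x2, ..., x40][y1] format (all values illustrative):

```python
import numpy as np

def make_labeled_training_object(channel_load, measured_gain_db):
    x = np.asarray(channel_load, dtype=float)  # 40 on/off indicators (+1 / -1)
    y = float(measured_gain_db)                # label: measured target-channel gain (dB)
    return x, y

x, y = make_labeled_training_object([1, -1] * 20, measured_gain_db=17.3)
```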

The input values of the labeled training objects may have been converted from continuous input values (e.g., decibels) to binary values. A threshold value may be selected, and a value greater than the threshold value may be set to '1', while a value less than the threshold value may be set to '-1'. For example, the threshold may be set to -36 dB, in which case an input value of -23 dB would be represented as '1' and an input value of -49 dB would be represented as '-1'.

The set of labeled training objects 410 may include any number of labeled training objects. The number of labeled training objects included in the set of labeled training objects 410 may be predetermined. For example, five thousand gain measurements may be employed, each having a different channel load, and thus the set of labeled training objects 410 may include five thousand labeled training objects. The number of labeled training objects included in the set of labeled training objects 410 may be determined based on the amount of time required for collecting the set of labeled training objects 410.

The ML model 420 may be trained using the labeled training objects 410 to generate a generated ML model for predicting the gain of the target channel of the amplifier device 210 given the channel load. The generated ML model 420 may receive as input a channel load, where the channel load is the set of active or inactive values of all or a portion of the input ports.

In certain embodiments, it may be preferable to further optimize the generated ML model 420 after initially training the ML model 420 (e.g., the generated ML model 420). The proposed method can operate in two scenarios: 1) where there is a set of unlabeled candidate training objects 430 (not yet used to train the generated ML model 420) from which additional training objects 440 are selected, and 2) where no unlabeled training objects are available, in which case the generated ML model 420 is used to generate additional training objects 440. The process of selecting/generating additional training objects is referred to as "active learning". In both scenarios, the additional training objects selected/generated using the ML model 420 are sent to a laboratory technician for labeling (i.e., gain measurement). For each additional training object 440, a label (i.e., a gain measurement) is added to the additional training object 440 to generate an additional labeled training object. The additional labeled training objects are then added to the set of labeled training objects 410 for further training the generated ML model 420 to generate an improved generated ML model 420.

For example, additional training objects 440 may be selected by an algorithm that calculates a variance based on the uncertainty level of the gain output predicted by the ML model. The channel load with the largest uncertainty for the generated ML model 420 may be selected as the additional training object 440. The variance may be used as a measure of uncertainty, and the channel load with the highest variance for the target channel gain may be selected as the additional training object 440. Various methods for selecting additional training objects 440 are described herein.
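A sketch of such variance-based selection for a tree-ensemble model is given below; the use of scikit-learn's estimators_ attribute (e.g., of RandomForestRegressor) is an assumption for illustration and not part of the described method:

```python
import numpy as np

def select_by_empirical_variance(ensemble, unlabeled_pool):
    """Pick the channel load whose per-tree predictions disagree the most,
    i.e. the candidate with the largest empirical variance."""
    variances = []
    for x in unlabeled_pool:
        per_tree = [tree.predict([x])[0] for tree in ensemble.estimators_]
        variances.append(np.var(per_tree))
    return unlabeled_pool[int(np.argmax(variances))]
```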

If there is a set of unlabeled candidate training objects 430, an additional training object 440 is selected therefrom. The channel load corresponding to the unlabeled candidate training object 430 may be applied to the amplifier device 210 and the signal strength at the input and output ports of the target channel may be measured, for example, by a technician. The gain of the target channel is then determined by calculating the ratio of the signal strength value at the output port to the signal strength value at the input port. Measuring the signal strength at the input and output ports of the target channel, determining the gain of the target channel given a particular channel load, and adding the determined gain to the unlabeled candidate training object may be referred to as labeling the unlabeled candidate training object 430. The unlabeled candidate training objects 430 may include the additional training objects 440. One or more additional training objects 440 may be determined. After determining the one or more additional training objects 440, the additional training objects 440 may be labeled and added to the set of training objects 410 previously used to train the generated ML model 420. The generated ML model 420 may be further trained using the updated set of labeled training objects 410 to generate an improved generated ML model 420 (e.g., to further learn the coefficients of the generated ML model 420). The resulting improved ML model 420 may then be used to predict the gain of the target channel of the amplifier device from the channel load. Iteratively, the generated ML model 420 may be further optimized by collecting additional training objects 440, labeling the additional training objects 440, updating the set of labeled training objects 410, and further training the ML model 420 using the updated set of labeled training objects 410 (e.g., further training to learn the coefficients of the generated ML model 420).

The number of additional training objects 440 selected/generated may be predetermined, and/or the number of times additional training objects 440 are collected may be predetermined. For example, one hundred additional training objects 440 may be collected each time the generated ML model 420 is further trained, and this process may be repeated ten times, thereby generating a total of one thousand additional training objects 440 that may be tagged and provided to the generated ML model 420 to generate an improved generated ML model 420. The number of additional training objects 440 collected and/or the number of times additional training objects 440 are collected may be determined based on satisfying a threshold (e.g., a threshold error rate of generated ML model 420).

Fig. 5 is a flow diagram of a method 500 for generating ML models and for generating additional labeled training objects for training generated ML model 420 in accordance with one or more illustrative aspects of the present disclosure. In one or more embodiments, method 500, or one or more steps thereof, may be performed by one or more computing devices or entities. For example, the method 500 may be performed by software for execution by the computing system 100. The software includes computer-executable instructions stored in a computer-readable medium, such as a non-transitory computer-readable medium. Encoding of software to perform the method 500 is within the scope of one of ordinary skill in the art with respect to this application. Some steps or portions of steps of the method 500 illustrated in the flow chart may be omitted or changed in order. It should be emphasized that, unless otherwise specified, the method 500 shown in fig. 5 need not be performed in the exact sequence as shown; and as such, various steps may be performed in parallel rather than in sequence.

At step 505, a selection of a target channel of the amplifier device 210 may be received. The selection may include any channel of the amplifier device 210. Any target channel of interest of the amplifier device 210 may be selected. For example, if the amplifier device 210 includes 40 channels, any one of the 40 channels of the amplifier device 210 may be selected as the target channel. The selection may be made by a user or automatically, for example in an iterative process in which there are multiple target channels of interest. In some embodiments, the algorithm may repeatedly perform steps 505-536 of method 500 and iterate through each of the channels of the amplifier device 210 to obtain learned ML models (e.g., generated ML models) for each channel of the amplifier device 210.

At step 508, a set of labeled training objects 410 may be received or generated. The number of labeled training objects included in the set of labeled training objects 410 may be predetermined and/or selected by a user.

At step 510, additional features corresponding to the selected target channel may be generated, which may be referred to as "feature generation". Additional features may be generated by algorithms and/or functions. The signal strength value of the input signal at each input port (i.e., channel input) may be referred to as a main effect. For example, in fig. 2, the main effects of the amplifier device 210 are x1 to x40. The number of channels of the amplifier device 210 is the same as the number of main effects. For example, an amplifier device 210 comprising 40 channels has exactly 40 main effects.

The additional features may include features generated based on the main effects. For each pair of main effects xi and xj, a 2-way interaction (or pairwise interaction) xixj can be generated. For example, up to 780 2-way interactions may be generated for an amplifier device 210 comprising 40 channels. Any combination and any number of additional features may be generated, such as third-order interactions or higher. In some cases, only a portion of the additional features may be generated. For example, while up to 780 2-way interactions may be generated for an amplifier device 210 comprising 40 channels, in some cases a smaller number of 2-way interactions may be generated and/or used to model the target channel.
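A minimal sketch of this feature-generation step, producing the 40 main effects plus the C(40, 2) = 780 pairwise interactions:

```python
from itertools import combinations

def add_pairwise_interactions(x):
    # main effects x_1..x_N followed by every 2-way interaction x_i * x_j
    main_effects = list(x)
    interactions = [x[i] * x[j] for i, j in combinations(range(len(x)), 2)]
    return main_effects + interactions

features = add_pairwise_interactions([1, -1] * 20)
print(len(features))  # 40 + 780 = 820
```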

The number of additional features generated and/or the highest order of the additional features may be selected and/or predetermined. The number of additional features to be generated may be selected or predetermined based on the desired execution time of the method 500. Generating fewer additional features may allow method 500 or portions of method 500 to be performed more quickly.

Additional features may be generated for each labeled training object included in the set of labeled training objects 410. The generated additional features may be stored with the set of training objects 410, for example in a database or list. In some cases, no additional features may be generated, and the database or list may include only the main effects.

At step 515, the number of additional features added to each labeled training object in the set of labeled training objects 410 for modeling the gain of the target channel (e.g., a generated ML model for learning the gain of the target channel) may be reduced. A feature list may be created that includes the main effects and the additional features. Each of the main effects and additional features in the feature list is hereinafter referred to as a feature. Some of the main effects and/or additional features may have minimal correlation with the gain of the target channel. At step 515, the main effects and additional features having minimal correlation with the gain of the target channel may be determined and removed from the feature list. Various methods may be used at step 515 to select and remove features (e.g., main effects or additional features) from the feature list. Method 600, described below and in fig. 6, is one example of a method that may be used to remove features from the feature list at step 515 to reduce the number of features included in the feature list.

Each feature (e.g., main effect or additional feature) included in the list of features may be examined to determine a degree of influence of the feature on the gain of the target channel. A performance metric may be determined for each of the features included in the feature list. The performance metric may include a correlation between the feature and the gain of the target channel. The labeled training objects included in the set of labeled training objects 410 may be used to calculate the performance metric for each feature included in the feature list. Features may then be removed from the feature list based on their performance metrics. A feature may be removed if its performance metric fails to meet a threshold, such as a predetermined threshold. The features included in the feature list may be sorted based on the performance metrics. All or a subset of the features in the feature list may be sorted. A predetermined number of features may be selected from the ranked features. The selected ranked features may be maintained in the feature list and the other features may be removed from the feature list.
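One possible realization of this filtering step is sketched below; the text does not fix a particular performance metric, so the absolute Pearson correlation and the 0.05 cutoff used here are assumptions:

```python
import numpy as np

def filter_and_rank_features(X, y, threshold=0.05):
    """Score each feature column of X by |correlation| with the target-channel
    gains y, drop features below the threshold, and return the surviving
    feature indices ranked most-relevant first."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    kept = [j for j, s in enumerate(scores) if s >= threshold]
    return sorted(kept, key=lambda j: scores[j], reverse=True)
```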

Features that were initially removed from the feature list may be added back to the list. If an additional feature, such as a 2-way interaction, is selected, but a main effect that is a component of the additional feature has not been selected, that main effect may be added back to the feature list. For example, if x2x30 is included in the feature list but x30 was removed from the feature list, then x30 may be added back to the feature list because it is a component of the selected feature x2x30.

Some features may not be examined; rather, those features may be kept in the feature list without any examination. The features corresponding to the target channel (i.e., the main effect of the target channel) and/or features of channels adjacent to the target channel may be maintained in the feature list without examination. For example, if channel 28 is selected as the target channel at step 505, the main effects x28, x29, and x30 may be preserved without determining any performance metrics for those features. It can be appreciated that the inputs at some of the input ports 220, e.g., the input port 220 corresponding to the target channel and the input ports 220 of adjacent channels, will have a relatively large impact on the gain of the target channel. Therefore, skipping the examination of those main effects (i.e., input ports) may be more efficient, as those main effects would be expected to remain in the feature list after the examination.

At step 520, the features retained after step 515 may be sorted. In some cases, step 515 may not be performed, in which case all features would be sorted at step 520. For example, if no additional features are generated at step 510, all main effects may be ranked at step 520. A ranking algorithm may be used to rank the features included in the feature list. The ranking algorithm may examine the set of labeled training objects 410 to perform the ranking. The features may be ranked based on a correlation between the target port and each of the features. A sorted list of features may be generated that includes the rank and each of the corresponding features. The sorted list of features may order the features from most relevant to least relevant based on how much each feature affects the gain of the target port according to the ranking algorithm.

Certain features may be sorted at the top of the list without performing the ranking algorithm on them. The features of the channel corresponding to the target port (e.g., the main effect of the target channel), and/or the features of channels adjacent to the target port (e.g., the main effects of the channels adjacent to the target port), may be maintained and/or sorted at the top of the list without examination. Knowledge of the amplifier device 210 being modeled and/or other domain knowledge may be used to determine which features are to be ranked without performing the ranking algorithm on those features. For example, if a value of a particular feature was previously found to have a greater impact on the output of the target port, the feature may be sorted at the top of the list without performing the ranking algorithm on that particular feature.

At step 525, the ML model features used to generate the ML model may be updated by adding features from the sorted list of features to the ML model features. The sorted features may be added to the ML model features in the order in which they were ranked. For example, the ML algorithm may first be instructed to generate the ML model 420 using the first four ranked features and then instructed to generate the ML model based on the first five ranked features, and the performance metrics computed for each of the two generated ML models may be compared to each other.

At step 530, the ML model may be generated using the updated ML model features. The ML model may be generated by using a predetermined number of the highest ranked features from the sorted list of features generated at step 520. For example, the ML model may be generated using the four highest ranked features in the sorted list of features.

The ML model may be trained using labeled training objects included in the set of labeled training objects 410, which may include the labeled training objects received at step 508. The ML model may be trained by providing labeled training objects from the set of labeled training objects 410 to a machine learning algorithm (e.g., a least squares algorithm) to be learned. Training may include manual feature selection.

ML model 420 may include one or more of the following: a linear function, a quadratic function, a tree-based ensemble model, or any other suitable type of function. Trained ML model 420 may receive as input the channel load, including the binary values of each of the features in the model, and output the predicted gain for the target channel.

At step 535, performance metrics for the ML model may be computed. The performance metric of the ML model may include an indication of how well the ML model predicts the gain of the target channel for the channel load associated with each labeled training object included in the set of labeled training objects 410; in other words, how well the ML model fits the set of labeled training objects 410. The performance metric of the ML model may be measured using an algorithm such as a Bayesian Information Criterion (BIC) algorithm, an Akaike Information Criterion (AIC) algorithm, or Cross Validation (CV).
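As an illustrative sketch only, a BIC-style performance metric for a least-squares gain model could be computed as follows. The design matrix X of labeled channel loads, the gain vector y, and the helper name fit_and_score_bic are assumptions, and the Gaussian-residual form of the BIC is one common choice rather than the required one.

```python
import numpy as np

def fit_and_score_bic(X, y):
    """Fit a least-squares gain model and return its coefficients and BIC.

    X: (n, p) array of labeled channel loads (features), y: (n,) measured gains.
    Hypothetical helper; the BIC below assumes Gaussian residuals.
    """
    n, p = X.shape
    # Least-squares fit of gain = X @ coeffs.
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coeffs
    sigma2 = np.mean(residuals ** 2)          # residual variance estimate
    log_likelihood = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    bic = p * np.log(n) - 2 * log_likelihood  # lower BIC = better fit/complexity trade-off
    return coeffs, bic
```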

At step 536, a determination may be made as to whether the ML model is to be further trained.

The determination at step 536 may be made based on the computed performance metrics of the ML model, for example, the amount of error between the gain values predicted by the ML model and the actual gain values, or any other performance metric computed at step 535. If the performance metric fails to meet a threshold, the method 500 may return to step 525 to add another feature from the sorted list of features to the list of ML features used to generate the ML model. If the performance metric satisfies the threshold, the method 500 proceeds to step 537.

At step 537, the generated ML model 420 for the selected channel may be stored. The ML model 420 may be stored in a database. The generated ML model 420, for example, can be sent to a server and/or any other device. The generated ML model 420 may be associated with the amplifier device 210. The generated ML models 420 may be associated with other generated ML models 420 for the same amplifier device 210, for example, with generated ML models 420 for other target output ports (e.g., channels) of the amplifier device 210. The generated ML model 420 may include an indication associated with a particular target output port of interest.

At step 540, a variance model may be determined for the generated ML model 420. The variance model may output a variance amount or an estimated variance amount for each input combination of the generated ML model 420. In other words, for each set of inputs to generated ML model 420 (the dominant effects and additional features used in generating generated ML model 420), the variance model may output actual or estimated variances. Given a combination of inputs, the variance amount may indicate an amount of uncertainty in the output prediction gain of the generated ML model 420.

At step 545, one or more candidate training objects 430 may be selected using the variance model generated at step 540. A maximum value of the variance model may indicate a channel load (e.g., an input combination) at which the generated ML model 420 has the highest uncertainty for the predicted gain of the target output port (e.g., channel) based on the channel load (e.g., the input combination). Candidate training objects 430 may be determined based on the channel load (e.g., input combination) that maximizes the variance model. Candidate training object 430 may include a channel load (e.g., input combination).

The channel load (e.g., input combination) that maximizes the variance model may include input values for all or a portion of the input ports of the amplifier device 210. For example, for an amplifier device 210 that is a 40-channel amplifier device, the generated ML model 420 stored at step 537 may include five features corresponding to particular input values, and thus the variance model may include those five features. An acceptable value may be determined for each input port that is not in the channel load (e.g., input combination). The values of these other input ports may be selected randomly, all set inactive, all set active, or determined using any other method.

At step 550, the candidate training objects 430 selected at step 545 are labeled to produce additional training objects 440. Each additional training object 440 may include the channel load (e.g., input combination) of the corresponding candidate training object 430 and is labeled with the determined gain of the target channel for that channel load (e.g., input combination).

The selected candidate training object 430, which becomes the additional training object 440, may be labeled by sending the selected candidate training object 430 to an apparatus for inputting values to the amplifier device 210 and measuring the gain of the target output port (e.g., channel). Candidate training objects 430 may be displayed to a user, and the user may set the input ports of amplifier device 210 to the values provided in the candidate training objects 430. In other words, the user may set all or a subset of the input ports of the amplifier device 210 to active or inactive based on the candidate training objects 430. The user may measure the signal strength values at the output port of the amplifier device 210 for the channel in order to calculate the channel gain.

At step 555, the set of labeled training objects 410 may be updated to include the additional labeled training objects. At each iteration of step 550, an additional training object $\mathbf{x}^{*}$ may be selected, the gain of the target channel $y(\mathbf{x}^{*})$ may be observed when the channel load (e.g., input combination) of the additional training object is applied, and, after the additional training object is labeled, the set of labeled training objects 410 may be updated to include the additional labeled training object. The additional labeled training object may be appended as a new row of the $n \times p$ design matrix $X$, after which new linear coefficients are estimated. After adding the additional labeled training object, the new design matrix may be $\begin{bmatrix} X \\ \mathbf{x}^{*\top} \end{bmatrix}$, wherein $X$ comprises the set of labeled training objects 410 and $\mathbf{x}^{*}$ comprises the additional labeled training object. The marginal and conditional variances are consistent, and the prediction variance may be described as $\operatorname{Var}\left(\hat{y}(\mathbf{x})\right) = \sigma^{2}\,\mathbf{x}^{\top}(X^{\top}X)^{-1}\mathbf{x}$. In order to maximize the prediction variance, the maximization algorithm may keep the scale fixed; where $\mathbf{x}$ has a fixed norm to avoid scaling, $\sigma^{2}$ may be treated as constant. The variance maximization may therefore be described as $\mathbf{x}^{*} = \arg\max_{\mathbf{x}}\, \mathbf{x}^{\top}(X^{\top}X)^{-1}\mathbf{x}$ subject to $\mathbf{x}^{\top}\mathbf{x} = c^{2}$, whose solution is aligned with $\mathbf{e}_{\min}$, the eigenvector associated with the smallest eigenvalue of $X^{\top}X$. Where the feature space is binary (i.e., $\mathbf{x} \in \{-1, 1\}^{p}$), a loose approximation may be $\mathbf{x}^{*} = \operatorname{sign}(\mathbf{e}_{\min})$.
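A minimal sketch of the loose approximation above, assuming the labeled training objects are available as a numpy design matrix with values in {-1, +1}; the function name and the tie-breaking rule are illustrative.

```python
import numpy as np

def max_variance_channel_load(X):
    """Approximate the binary channel load that maximizes prediction variance.

    X: (n, p) design matrix of labeled training objects (values in {-1, +1}).
    Returns a vector in {-1, +1}^p, the loose approximation sign(e_min).
    """
    gram = X.T @ X                           # p x p information matrix X^T X
    eigvals, eigvecs = np.linalg.eigh(gram)  # eigh returns ascending eigenvalues for symmetric matrices
    e_min = eigvecs[:, 0]                    # eigenvector of the smallest eigenvalue
    x_star = np.sign(e_min)
    x_star[x_star == 0] = 1                  # break exact ties arbitrarily toward "on"
    return x_star
```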

Method 500 may continue at step 560, and the updated set of labeled training objects 410 may be used to further train generated ML model 420 and generate an improved generated ML model 420. Although not illustrated in fig. 5, the method 500 may return from step 560 to step 540 and may again maximize the variance in view of the additional labeled training objects. The method 500 may decide whether to return to step 540 based on a counter. For example, after every 10 additional labeled training objects have been added to the set of labeled training objects 410, a further refined generated ML model 420 may be generated (e.g., the coefficients of the generated ML model may be further refined).

FIG. 6 is a flow diagram of a method 600 for selecting features for generating an ML model in accordance with one or more illustrative aspects of the present disclosure. In one or more embodiments, method 600, or one or more steps thereof, may be performed by one or more computing devices or entities. For example, the method 600 may be performed by software for execution by a processing unit of the computing system 100. The software includes computer-executable instructions stored in a computer-readable medium, such as a non-transitory computer-readable medium. Encoding of software to perform the method 600 is within the scope of one of ordinary skill in the art with respect to this application. Some steps or portions of steps of the method 600 shown in the flowchart may be omitted or changed in order. It should be emphasized that, unless otherwise specified, the method 600 shown in fig. 6 need not be performed in the exact sequence as shown; and as such, various steps may be performed in parallel rather than in sequence.

At step 605, characteristics corresponding to the selected target port or target channel may be determined. Any combination of the primary features and/or the additional features may be determined. The actions performed at step 605 may be similar to those described above at step 510. Data corresponding to the features may be calculated for each training object 410 and may be stored with the training objects 410.

At step 610, a correlation between the gain of the target port or target channel and each characteristic may be calculated. Training object 410 may be used to determine a correlation between the gain of the target port or target channel and each feature. The correlation may be computed as a Pearson correlation and/or any other type of correlation function.

At step 615, the features may be ranked based on the strength of their correlation with the gain of the target port or target channel. At step 620, a list may be created with several highest ranked features. The number of highest ranked features in the list may be predetermined and/or selected. For example, the list may include twenty highest ranked features that have the strongest correlation with the gain of the target port or target channel. The number of highest ranked features may be determined based on a threshold relevance strength. All features that meet the threshold correlation strength may be included in the list.
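The correlation-based ranking and trimming of steps 610 through 620 might be sketched as follows; the feature matrix, gain vector, default of twenty retained features, and use of the absolute Pearson correlation are assumptions for illustration.

```python
import numpy as np

def top_correlated_features(features, gains, k=20):
    """Rank features by |Pearson correlation| with the target-channel gain.

    features: (n, p) array of feature values per labeled training object.
    gains: (n,) measured gains of the target port. Returns the k best column indices.
    """
    corrs = np.array([np.corrcoef(features[:, j], gains)[0, 1]
                      for j in range(features.shape[1])])
    corrs = np.nan_to_num(corrs)         # constant columns yield NaN; treat as correlation 0
    order = np.argsort(-np.abs(corrs))   # strongest correlation first
    return order[:k], corrs[order[:k]]
```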

At step 625, the list may be scanned to determine whether all components of the additional features are included. Additional features may include pairwise interactions, e.g., $x_i x_j$. For example, if $x_3 x_7$ is included in the feature list, the list may be scanned to determine whether $x_3$ is present in the list and/or whether $x_7$ is present in the list.

If all of the primary effects of the additional features are not included in the list of features, the missing primary effects may be added to the list at step 635. If all of the primary effects of the additional features are included in the list, the list of features may be stored at step 630.

At step 635, all or a portion of the missing primary effects may be added to the feature list. As described above with respect to step 515 of fig. 5, the primary effect may be added to the list of features if the primary effect forms part of an additional feature in the list.

A list of features may be stored at step 630. For example, the list of features may be stored in a database. The list of features may be transmitted to another function or algorithm. The list of features may be sorted, for example, by using the sorting method 700 described below.

Fig. 7 is a flow diagram of a method 700 for ranking features in accordance with one or more illustrative aspects of the present disclosure. In one or more embodiments, method 700, or one or more steps thereof, may be performed by one or more computing devices or entities. For example, method 700 may be performed by software for execution by a processing unit of computing system 100. Encoding of software to perform method 700 is within the scope of one of ordinary skill in the art with respect to this application. The software includes computer-executable code or instructions stored in a computer-readable medium, such as a non-transitory computer-readable medium. Some steps or portions of steps in the flowcharts may be omitted or changed in order. It should be emphasized that, unless otherwise specified, the method 700 shown in fig. 7 need not be performed in the exact sequence as shown; and as such, various steps may be performed in parallel rather than in sequence.

At step 705, a linear ML model 420 is generated to predict the gain of the selected target channel given a combination of inputs. The ML model 420 may be a linear model that includes the primary effects from a feature list, such as a feature list generated using the method 600. An example of a linear model that includes 40 primary effects is $\hat{y}_{ch} = b_1 x_1 + b_2 x_2 + \cdots + b_{40} x_{40}$, where, at a given channel load (alternatively referred to as an input combination), $\hat{y}_{ch}$ comprises the estimated predicted gain of the channel $ch$ corresponding to the target channel. Each value of $b$ comprises a coefficient and each value of $x$ comprises a primary effect.

At step 710, the primary effects corresponding to the target channel and the adjacent ports may be placed in the sorted list of features. The features corresponding to the target port may include the features corresponding to the input port associated with the same channel as the target port. The adjacent ports may include one or more ports adjacent to that input port. Features corresponding to other input ports may be placed in an unordered feature list. The coefficients of the input port of the target port and of the adjacent input ports in the linear model may be set to predetermined values. For example, in the example linear model given above, the coefficients $b_{ch-1}$, $b_{ch}$, and $b_{ch+1}$ may be set to a value of '1'. In this example, the other coefficients $b$ may be set to a value of '0'.

An ordered list of features may be created. The dominant effect corresponding to the target port and the dominant effects corresponding to the adjacent input ports may be placed at the top of the sorted list of features. For example, $x_{ch-1}$, $x_{ch}$, and $x_{ch+1}$ may be placed in the sorted list of features. Domain knowledge may be used to determine which dominant effects are placed in the sorted list of features. Any feature or combination of features may be placed in the sorted list of features.

At step 715, a residual value for the linear model may be determined. The residual value may be calculated by subtracting the contribution of the features in the sorted list of features from the measured gain of the target port. An example of a formula for determining the residual value is $y_{ch} - (b_{ch-1} x_{ch-1} + b_{ch} x_{ch} + b_{ch+1} x_{ch+1})$.

At step 720, the features in the unordered feature list may be determined. The determined features may include features that are most relevant to the residual values determined at step 715. At step 725, the features determined at step 720 may be moved from the unordered feature list to the sorted feature list. The features determined at step 720 may be added as the lowest ranked features in the sorted list of features.

At step 730, a determination may be made as to whether all features have been ranked. If no features remain in the unordered feature list, the ordered feature list may be stored at step 740. The sorted list of features may be stored in a database and/or transmitted.

If the feature remains in the unordered feature list at step 730, the next feature that is most relevant to the residual may be determined at step 720.
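One possible reading of steps 710 through 730 is sketched below; it assumes the seeded primary effects keep unit coefficients for the first residual and that the residual is re-fitted after each feature is moved, which the description above leaves open.

```python
import numpy as np

def residual_rank_features(X, y, seed_idx):
    """Order main effects by correlating each with the current model residual.

    X: (n, p) matrix of main effects, y: (n,) measured target-port gains,
    seed_idx: indices placed at the top without ranking (e.g., [ch-1, ch, ch+1]).
    """
    ordered = list(seed_idx)
    remaining = [j for j in range(X.shape[1]) if j not in ordered]
    residual = y - X[:, ordered].sum(axis=1)   # seeded coefficients fixed to 1 (assumption)
    while remaining:
        corrs = np.nan_to_num([abs(np.corrcoef(X[:, j], residual)[0, 1]) for j in remaining])
        best = remaining.pop(int(np.argmax(corrs)))
        ordered.append(best)                    # lowest-ranked slot in the sorted list
        coef, *_ = np.linalg.lstsq(X[:, ordered], y, rcond=None)  # re-fit before the next pass
        residual = y - X[:, ordered] @ coef
    return ordered
```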

FIG. 8 is a flow diagram of a method 800 for selecting a generated ML model, in accordance with one or more illustrative aspects of the present disclosure. In one or more embodiments, method 800, or one or more steps thereof, may be performed by one or more computing devices or entities. For example, portions of method 800 may be performed by components of computing system 100. The method 800, or one or more steps thereof, may be embodied in computer-executable instructions stored in a computer-readable medium, such as a non-transitory computer-readable medium. Some steps or portions of steps in the flowcharts may be omitted or changed in order.

At step 805, an ML model 420 may be generated using the set of labeled training objects 410, thereby producing generated ML model 420. A model list may be generated, where the model list includes one or more features, such as primary features and additional features.

At step 810, performance metrics for the ML model 420 can be computed. The performance metric may be based on the ML model 420 and/or the set of training objects 410. For example, the BIC algorithm may be used to determine the performance metric. In this example, the output of the BIC algorithm may include a performance metric. The performance metric may indicate how well ML model 420 fits training object 410. The ML model 420 and/or the performance metrics may be stored, for example, in a database or list.

At step 815, features from the sorted list of features may be added to the model list. The added features may include the highest ranked features in the sorted list of features. One or more of the highest ranked features may be added to the model list. The first ranked feature in the feature list may be the feature corresponding to the target port. For example, if a target port associated with channel 40 is selected, the model list may include $x_{40}$ at step 815. Features corresponding to the target port may be removed from the sorted list of features and placed in the model list.

At step 820, the ML model 420 may be trained using the set of training objects 410. The training objects may be augmented to include the features from the model list. The ML model 420 trained at step 820 may be trained using these augmented training objects, and the model may thus include the features from the model list. For example, if the model list includes features $x_2$ and $x_7$, then ML model 420 may include features $x_2$ and $x_7$.

At step 825, performance metrics corresponding to the ML model 420 generated at step 820 may be determined. The performance metric may be calculated in the same or similar manner as the performance metric is calculated at step 810. For example, if a BIC algorithm is used at step 810, then the BIC algorithm may be used at step 825. The ML model 420 and/or the performance metrics may be stored, for example, in a database or list.

At step 830, the sorted list can be checked to determine if the sorted list is empty. If the sorted list is not empty, the method 800 may proceed to step 815 where the highest ranked features from the sorted list may be moved to the model list. The training objects may be augmented to include the highest ranked features. Then at step 820, additional ML models 420 may be generated and trained using the augmented training objects.

If the sorted list is empty at step 830, the method 800 may proceed to step 835. At step 835, the performance metrics computed at steps 810 and 825 for each ML model 420 may be compared. The ML model 420 with the best performance metric may be determined. The ML model 420 with the best performance metric may be the ML model 420 that best fits (i.e., best predicts) the set of training objects 410.

At step 840, the features used to generate the ML model 420 with the best performance metric may be stored. The ML model 420 corresponding to the best performance metric may be stored. Features and/or ML model 420 may be used to generate candidate training objects 430 and/or additional training objects 440. Given the input values used to generate the features of ML model 420, ML model 420 may be used to predict the gain of the target port.
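The loop of steps 815 through 835 could be sketched as the forward selection below, reusing the hypothetical fit_and_score_bic helper from the earlier example; treating a lower BIC as the better performance metric is an assumption consistent with common practice.

```python
import numpy as np

def select_model_by_bic(X, y, ranked_features):
    """Grow the model list one ranked feature at a time and keep the best BIC.

    X: (n, p) augmented training objects, y: (n,) measured gains,
    ranked_features: column indices ordered from most to least relevant.
    """
    best = (np.inf, [])                      # (BIC, feature subset)
    model_list = []
    for feat in ranked_features:
        model_list.append(feat)              # move next feature from sorted list to model list
        _, bic = fit_and_score_bic(X[:, model_list], y)  # helper sketched earlier
        if bic < best[0]:
            best = (bic, list(model_list))
    return best[1], best[0]                  # features and metric of the best-fitting model
```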

Fig. 9 is a flow diagram of a method 900 for determining additional labeled training subjects based on variance in accordance with one or more illustrative aspects of the present disclosure. In one or more embodiments, method 900, or one or more steps thereof, may be performed by one or more computing systems, such as computing system 100. For example, portions of method 900 may be performed by components of computing system 100. The method 900, or one or more steps thereof, may be embodied in computer-executable instructions stored in a computer-readable medium, such as a non-transitory computer-readable medium. Some steps or portions of steps in the flowcharts may be omitted or changed in order.

At step 905, a variance model of the generated ML model 420 may be generated. A variance model may be generated based on the type of ML model 420 generated and/or the set of labeled training objects 410. The set of labeled training objects 410 may have been used to train an ML model that produced generated ML model 420.

At step 910, the variance model may be examined to determine whether it is a closed form function. If the variance model is not in closed form, the method 900 may continue at step 915.

At step 915, a maximum variance may be estimated. The variance may include a variance of the predicted gain values of the generated ML model 420. The variance may be calculated using a variance model for one or more channel loads (e.g., input combinations). A set of all possible input combinations of features in the variance model may be generated, and a variance may be calculated for each input combination in the set of all input combinations. The input combinations may be ordered based on the estimated variance of the predicted gain value corresponding to each input combination. One or more input combinations with the highest estimated variance may be stored. For example, the fifty highest ranked input combinations may be selected and/or stored.

If it is determined at step 910 that the variance model is in a closed form, the method 900 may continue at step 920. At step 920, the method 900 may determine whether a known solution exists for the variance model created at step 905. The known solution may provide one or more maxima of the variance model. In other words, the known solution may identify one or more channel loads (e.g., input combinations) with the highest variance.

If there are known solutions, one or more maxima of the variance model may be determined at step 930. Channel loads (e.g., input combinations) corresponding to one or more maximum values may be stored. Multiple channel loads (e.g., input combinations) with the highest variance may be stored. For example, fifty channel loads (e.g., input combinations) with the highest variance may be determined and stored. The candidate training object 430 corresponding to the highest variance may be determined at step 930 using the methods 1000 and 1100 described below and in fig. 10 and 11.

If there is no known solution for the variance model, the maximum variance may be determined at step 925 by iterating through the set of channel loads (e.g., input combinations). The set of channel loads (e.g., input combinations) may include all possible channel loads (e.g., input combinations) for the features used in the variance model. For each channel load (e.g., input combination) in the set of channel loads (e.g., input combinations), the variance may be calculated using a variance model. The channel loads (e.g., input combinations) may be ranked based on the calculated variances, and one or more of the highest ranked channel loads (e.g., input combinations) may be selected and/or stored. Candidate training objects 430 may be determined at step 925 using methods 1200 and 1300 described below and in fig. 12 and 13.

Fig. 10 is a flow diagram of a method 1000 for solving a variance model and selecting additional training objects from a pool in accordance with one or more illustrative aspects disclosed herein. In one or more embodiments, method 1000, or one or more steps thereof, may be performed by one or more computing devices or entities. For example, portions of method 1000 may be performed by components of computing system 100. The method 1000 or one or more steps thereof may be embodied in computer-executable instructions stored in a computer-readable medium, such as a non-transitory computer-readable medium. Some steps or portions of steps in the flowcharts may be omitted or changed in order.

Method 1000 may be used to determine additional training subjects 440 based on a variance model. The method 1000 may be used when the variance model is a closed form variance model with a known solution.

At step 1005, a design matrix may be generated based on the set of labeled training objects 410. The design matrix may include all or a portion of the labeled training objects 410. The design matrix may include a portion of each of the labeled training objects 410. The columns of the design matrix may include one or more features, such as the features used to generate ML model 420. The rows of the design matrix may include the labeled training objects 410. Each row of the design matrix may include one labeled training object 410.

At step 1010, the design matrix may be used to determine the channel load (e.g., input combination) with the largest variance. A variance model, such as a closed form variance model with a known solution, may be determined and/or generated. The variance model may be solved using the design matrix to determine the one or more channel loads (e.g., input combinations) with the highest variance. The channel load (e.g., input combination) may include a '1' or '-1' value for each column of the design matrix, i.e., for each selected feature.

The channel load with the maximum variance may be determined as $\operatorname{sign}(\mathbf{e}_{\min})$, where $\mathbf{e}_{\min}$ is the eigenvector corresponding to the smallest eigenvalue of $X^{\top}X$, and where $X$ is the design matrix. The eigenvector $\mathbf{e}_{\min}$ may indicate the channel load (e.g., input combination) with the largest variance.

At step 1015, a pool of candidate training objects 430 may be searched to obtain training objects that satisfy the channel load (e.g., input combination) determined at step 1010. The pool of candidate training objects 430 may include one or more training objects that have been labeled (e.g., include measured gain values for the channel loads (e.g., input combinations) of the candidate training objects 430) but have not been used to further train the generated ML model 420. The channel load (e.g., input combination) determined at step 1010 may include a subset of the features of the candidate training objects 430. For example, a candidate training object 430 may include values for forty different features, while the channel load (e.g., input combination) may include values for five of those features. At step 1015, the pool of candidate training objects 430 may be searched to obtain candidate training objects 430 that match the subset of features in the determined channel load (e.g., input combination). The values of the remaining features that are not in the determined channel load (e.g., input combination) may be any combination of '-1' and '1'.

If an exact candidate training object 430 matching the channel load (e.g., input combination) is not found at step 1020, the candidate training object 430 in the pool of candidate training objects 430 with the next highest variance may be determined at step 1025. The next channel load (e.g., input combination) with the highest variance may be determined and the pool of candidate training objects 430 may be searched to obtain a matching candidate training object 430. If the search is unsuccessful, the input combination with the third highest variance may be determined, and the process may be repeated until a candidate training object 430 is found.
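Steps 1015 through 1025 might be approximated by the pool search below; it assumes the pool is an array of full channel loads, that the high-variance partial loads are supplied most-uncertain first, and that only the constrained feature columns need to match.

```python
import numpy as np

def find_candidate_in_pool(pool, partial_loads, feature_idx):
    """Return the first pooled training object matching a high-variance partial load.

    pool: (m, p) array of candidate channel loads,
    partial_loads: list of partial loads ordered by decreasing variance,
    feature_idx: columns of `pool` that the partial load constrains.
    """
    for load in partial_loads:                    # highest-variance load first
        matches = np.all(pool[:, feature_idx] == load, axis=1)
        if matches.any():
            return pool[np.argmax(matches)]       # first candidate whose constrained columns match
    return None                                   # caller falls back to the next-best pooled object
```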

After a candidate training object 430 is found at step 1020 or step 1025, a gain corresponding to the candidate training object 430 may be received at step 1030. A gain corresponding to the channel load (e.g., input combination) of the candidate training object 430 may be requested. For example, a database storing gain values for channel loads (e.g., input combinations) may be queried to obtain the gain for the channel load (e.g., input combination) corresponding to the candidate training object 430. The gain may be a measured gain, such as a measured gain of an output port of the amplifier device 210 given the channel load (e.g., input combination) of the candidate training object 430. The candidate training object 430 may be transmitted to a user or a device for measuring the gain of the target port (e.g., channel). Given the channel load (e.g., input combination) of the candidate training object 430, the operator or device may then measure the gain of the target output port (e.g., channel).

At step 1035, the additional training object 440 corresponding to the candidate training object 430 may be added as an additional labeled training object to the set of labeled training objects 410 to generate an updated set of labeled training objects 410. The candidate training object 430 may be removed from the pool of candidate training objects 430 and added to the set of labeled training objects 410. The additional training object 440 may include the channel load (e.g., input combination) and the measured gain of the target output port (e.g., channel) of the candidate training object 430. In addition to the other labeled training objects in the set of labeled training objects 410, the additional labeled training object may be used to further train the ML model 420 for the target output port (e.g., target channel).

Fig. 11 is a flow diagram of a method 1100 for solving a variance model and generating additional training objects according to one or more illustrative aspects disclosed herein. In one or more embodiments, method 1100, or one or more steps thereof, may be performed by one or more computing devices or entities. For example, portions of method 1100 may be performed by components of computing device 100. The method 1100, or one or more steps thereof, may be embodied in computer-executable instructions stored in a computer-readable medium, such as a non-transitory computer-readable medium. Some steps or portions of steps in the flowcharts may be omitted or changed in order.

Method 1100 may be used to determine additional training subjects 440 based on a variance model. The method 1100 may be used when the variance model is a closed form variance model with a known solution.

At step 1105, a design matrix may be generated based on the set of labeled training objects 410. At step 1110, the design matrix may be used to determine the one or more channel loads (or input combinations) with the largest variance. The actions performed at steps 1105 and 1110 may be similar to those described above with respect to steps 1005 and 1010, respectively.

At step 1115, an unlabeled candidate training object 430 may be generated. In contrast to fig. 10, where the candidate training objects are selected from a pool of candidate training objects 430, any possible channel load (e.g., input combination) is available for the unlabeled candidate training object 430 generated at step 1115. The unlabeled candidate training object 430 may be generated by randomly selecting values for the remaining features that are not included in the channel load (e.g., input combination). For example, if the channel load (e.g., input combination) determined at step 1110 has values for seven features, but the generated ML model 420 corresponds to a channel of a forty-channel amplifier device 210, then the values for the remaining thirty-three channels may be randomly selected and used in the unlabeled candidate training object 430. The values of the remaining features may all be set to '1', may be randomly selected, or may be otherwise determined. The values of the remaining features may also be determined by combining unlabeled candidate training objects generated for other target ports (e.g., target channels).
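A possible sketch of step 1115, assuming a forty-channel device and a random fill of the unconstrained ports; the helper name and the -1/+1 encoding follow the conventions used above.

```python
import numpy as np

def build_candidate(partial_load, feature_idx, num_channels=40, rng=None):
    """Expand a partial high-variance load into a full candidate training object.

    Features not constrained by the partial load are filled at random with -1/+1;
    the 40-channel width and the random fill are illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    candidate = rng.choice([-1, 1], size=num_channels)   # random values for the remaining ports
    candidate[feature_idx] = partial_load                 # keep the variance-maximizing values
    return candidate
```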

At step 1120, a gain corresponding to the unlabeled candidate training object 430 may be received. The actions performed at step 1120 may be similar to those described above with respect to step 1030. For example, unlabeled candidate training objects 430 may be displayed to a user, and the user may configure amplifier device 210 based on the unlabeled candidate training objects 430 and measure the gain of one or more output ports.

At step 1125, additional training objects 440 corresponding to unlabeled candidate training objects 430 may be labeled and added to the set of labeled training objects 410. Additional training objects 440 may include channel loads (e.g., input combinations) of unlabeled candidate training objects 430 and may be labeled with measured gains of target ports (e.g., target channels) to generate labeled additional training objects. In addition to other labeled training objects in the set of labeled training objects 410, additional labeled training objects may be used to further train the generated ML model 420 for a target port (e.g., a target channel).

Fig. 12 is a flow diagram of a method 1200 for iteratively determining a maximum variance and selecting additional training objects from a pool in accordance with one or more illustrative aspects disclosed herein. In one or more embodiments, method 1200, or one or more steps thereof, may be performed by one or more computing devices or entities. For example, portions of method 1200 may be performed by components of computing device 100. The method 1200, or one or more steps thereof, may be embodied in computer-executable instructions stored in a computer-readable medium, such as a non-transitory computer-readable medium. Some steps or portions of steps in the flowcharts may be omitted or changed in order.

Method 1200 may be used to determine additional training subjects 440 based on a variance model. The method 1200 may be used when the variance model is a closed form variance model with no known solution.

At step 1205, a design matrix may be created based on the set of labeled training objects 410. The actions performed at step 1205 may be similar to those described above with respect to step 1005.

At step 1210, all possible channel loadings (e.g., input combinations) for the dominant effects in the design matrix may be determined. A set of all possible channel loadings (e.g., input combinations) for the dominant effects in the variance model may be generated. Each channel loading (e.g., input combination) may include a '1' or '-1' value for each of the primary effects in the design matrix.

At step 1215, one or more channel loads (e.g., input combinations) that maximize the variance may be determined. A variance model corresponding to the design matrix may be determined and/or received. Each channel load (e.g., input combination) may be applied to the variance model, and a variance corresponding to each channel load (e.g., input combination) may be output. The variance of each channel load (e.g., input combination) may be stored. The channel loads (e.g., input combinations) may then be sorted based on the variance to determine the one or more channel loads (e.g., input combinations) with the highest variance. The variance may be calculated as $\mathbf{x}^{\top}(X^{\top}X)^{-1}\mathbf{x}$, where $X$ is the design matrix and $\mathbf{x}$ is the channel load (e.g., input combination).
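The iteration of step 1215 could be sketched as the brute-force scoring below; it assumes the number of features p is small enough that all 2^p combinations can be enumerated, and the default of fifty retained loads mirrors the example given earlier but is configurable.

```python
import numpy as np
from itertools import product

def rank_loads_by_variance(X, top=50):
    """Score every +/-1 combination of the design-matrix features by x^T (X^T X)^{-1} x.

    X: (n, p) design matrix; p is assumed small enough to enumerate 2^p combinations.
    Returns the `top` loads with the largest (unnormalized) prediction variance.
    """
    inv_gram = np.linalg.inv(X.T @ X)
    loads = np.array(list(product([-1, 1], repeat=X.shape[1])))     # all 2^p channel loads
    variances = np.einsum('ij,jk,ik->i', loads, inv_gram, loads)    # quadratic form per row
    order = np.argsort(-variances)                                   # highest variance first
    return loads[order[:top]], variances[order[:top]]
```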

At step 1220, the pool of unlabeled candidate training objects 430 may be searched to obtain unlabeled candidate training objects that match the channel load (e.g., input combination) determined at step 1215. At step 1225, the method 1200 may determine whether an exact unlabeled candidate training object 430 is found in the pool. If a matching unlabeled candidate training object 430 is not found, the unlabeled candidate training object 430 in the pool having the highest variance may be determined at step 1230.

After the unlabeled candidate training object 430 has been determined at step 1225 or step 1230, the gain of the channel load (e.g., input combination) corresponding to the unlabeled candidate training object 430 may be predicted at step 1235 using the generated ML model 420. At step 1240, an additional training object 440 including the channel load (e.g., input combination) of the candidate training object 430 may be labeled with the measured gain of the target port, and the labeled additional training object may be added to the set of labeled training objects 410. The actions performed at steps 1220 to 1240 may be similar to the actions described above with respect to steps 1015 to 1035.

Fig. 13 is a flow diagram of a method 1300 for iteratively determining a maximum variance and generating additional training objects in accordance with one or more illustrative aspects of the present disclosure. In one or more embodiments, method 1300, or one or more steps thereof, may be performed by one or more computing devices or entities. For example, the method 1300 may be performed by a routine or subroutine of software executed by a processor of the computing device 100. The method 1300, or one or more steps thereof, may be embodied in computer-executable instructions stored in a computer-readable medium, such as a non-transitory computer-readable medium. Some steps or portions of steps in the flowcharts may be omitted or changed in order.

Method 1300 may be used to determine additional training objects 440 based on a variance model. The method 1300 can be used when the variance model is a closed form variance model with no known solution.

At step 1305, a design matrix may be created based on the set of labeled training objects 410. The actions performed at step 1305 may be similar to those described above with respect to step 1005. At step 1310, all possible channel loads (e.g., input combinations) for the dominant effects in the design matrix may be determined. At step 1315, a channel load (e.g., input combination) that maximizes the variance may be determined. The actions performed at steps 1310 and 1315 may be similar to those described above with respect to steps 1210 and 1215, respectively.

At step 1320, a candidate training object 430 may be generated by randomly selecting values for the remaining features that are not included in the channel load (e.g., input combination). In contrast to fig. 12, where the unlabeled candidate training objects are selected from a pool of unlabeled candidate training objects 430, any possible channel load (e.g., input combination) is available for the candidate training object 430 generated at step 1320. The actions performed at step 1320 may be similar to those described above with respect to step 1115. At step 1325, a gain corresponding to the channel load (e.g., input combination) of the unlabeled candidate training object 430 may be received. The actions performed at step 1325 may be similar to those described above with respect to step 1030. At step 1330, an additional training object 440 corresponding to the unlabeled candidate training object 430 may be labeled and added to the set of labeled training objects 410 to generate an updated set of labeled training objects 410. The actions performed at step 1330 may be similar to those described above with respect to step 1125.

Fig. 14 is a flow diagram of a method 1400 for iteratively determining a maximum empirical variance and selecting additional training objects 440 from a pool of unlabeled candidate training objects according to one or more illustrative aspects disclosed herein. In one or more embodiments, the method 1400 or one or more steps thereof may be performed by one or more computing devices or entities. For example, portions of method 1400 may be performed by components of computing system 100. The method 1400 or one or more of its steps may be embodied in computer-executable instructions stored in a computer-readable medium, such as a non-transitory computer-readable medium. Some steps or portions of steps in the flowcharts may be omitted or changed in order.

Method 1400 may be used to determine additional training objects 440 based on an estimated variance. The method 1400 may be used when the variance is estimated rather than computed in closed form.

At step 1405, a design matrix may be created based on the set of labeled training objects 410. The actions performed at step 1405 may be similar to those described above with respect to step 1005. At step 1410, all possible channel loadings (e.g., input combinations) for the dominant effects in the design matrix may be determined. The actions performed at step 1410 may be similar to those described above with respect to step 1210.

At step 1415, the channel load (e.g., input combination) with the highest estimated variance may be determined. Each of the possible channel loads (e.g., input combinations) determined at step 1410 may be input to a variance model, and an estimated variance may be determined for each of the possible channel loads (e.g., input combinations). The channel loads (e.g., input combinations) may be ranked based on their estimated variances, and one or more of the highest ranked channel loads (e.g., input combinations) may be selected.

The variance model may include one or more generated ML models 420, such as one or more trees, for the target port (e.g., the target channel). To calculate the estimated variance of the channel load (or input combination), the channel load (e.g., input combination) may be input to each of the generated ML models 420. The generated ML models 420 may each output an estimated or predicted gain for a target port (e.g., a target channel) of the input combination. The variance may be estimated based on these calculated gain values. Channel loading (e.g., input combinations) with a relatively wide range of gain values may have a higher estimated variance than channel loading (or input combinations) where all calculated gain values are very similar.

The empirical variance may be determined using the formula $\widehat{\operatorname{Var}}\left(\hat{y}(\mathbf{x})\right) = \frac{1}{B-1}\sum_{b=1}^{B}\left(\hat{y}_b(\mathbf{x}) - \bar{y}(\mathbf{x})\right)^{2}$, wherein $\bar{y}(\mathbf{x}) = \frac{1}{B}\sum_{b=1}^{B}\hat{y}_b(\mathbf{x})$. In this formula, $B$ is the number of trees and $\hat{y}_b(\mathbf{x})$ is the prediction of tree $b$ for a given channel load (e.g., input combination) $\mathbf{x}$.
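Under the assumption that the generated ML models 420 form a tree ensemble such as a random forest, the empirical variance above might be computed as in the sketch below; the scikit-learn interface is used for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def empirical_variance(forest: RandomForestRegressor, loads):
    """Estimate prediction variance from the spread of per-tree gain predictions.

    forest: a fitted tree ensemble standing in for the generated ML models 420,
    loads: (m, p) candidate channel loads. Returns the sample variance per load.
    """
    per_tree = np.stack([tree.predict(loads) for tree in forest.estimators_])  # (B, m)
    return per_tree.var(axis=0, ddof=1)   # sample variance across the B trees

# Example: pick the load where the trees disagree the most.
# best_load = loads[np.argmax(empirical_variance(forest, loads))]
```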

At step 1420, the pool of unlabeled candidate training objects 430 may be searched to obtain unlabeled candidate training objects that match the channel load (e.g., input combination) determined at step 1415. At step 1425, the method 1400 may determine whether an exact unlabeled candidate training object 430 is found in the pool. If a matching unlabeled candidate training object 430 is not found, the unlabeled candidate training object 430 in the pool having the highest variance may be determined at step 1430.

After the unlabeled candidate training object 430 has been determined at step 1425 or step 1430, a gain corresponding to the unlabeled candidate training object 430 may be received at step 1435. At step 1440, an additional training object 440 including the channel load (e.g., input combination) of the unlabeled candidate training object 430 may be labeled with the measured gain of the target port to generate an additional labeled training object, and the additional labeled training object may be added to the set of labeled training objects 410 to produce an updated set of labeled training objects 410. The actions performed at steps 1420 to 1440 may be similar to those described above with respect to steps 1015 to 1035.

Fig. 15 is a flow diagram of a method 1500 for iteratively determining a maximum empirical variance and generating additional training objects in accordance with one or more illustrative aspects of the present disclosure. In one or more embodiments, method 1500, or one or more steps thereof, may be performed by one or more computing systems. For example, portions of method 1500 may be performed by computing system 100. The method 1500, or one or more steps thereof, may be embodied in computer-executable instructions stored in a computer-readable medium, such as a non-transitory computer-readable medium. Some steps or portions of steps in the flowcharts may be omitted or changed in order.

Method 1500 may be used to determine additional training objects 440 based on the estimated variance of the predictions of the generated ML model.

At step 1505, a design matrix may be created based on the set of labeled training objects 410. The actions performed at step 1505 may be similar to those described above with respect to step 1005. At step 1510, all possible input combinations of the primary effects in the design matrix may be determined. The actions performed at step 1510 may be similar to those described above with respect to step 1210. At step 1515, a channel load (e.g., input combination) that maximizes the estimated variance may be determined. The actions performed at step 1515 may be similar to those described above with respect to step 1415.

At step 1520, an unlabeled candidate training object 430 may be generated by randomly selecting values for the remaining features that are not included in the channel load (e.g., input combination). In contrast to fig. 14, where the candidate training object is selected from a pool of candidate training objects 430, any possible channel load (e.g., input combination) is available for the unlabeled candidate training object 430 generated at step 1520. The actions performed at step 1520 may be similar to those described above with respect to step 1115. At step 1525, a gain of the target channel may be received, where the gain of the target channel is determined by measuring the input signal strength and the output signal strength of the target channel. The actions performed at step 1525 may be similar to those described above with respect to step 1030. At step 1530, an additional training object 440 including the channel load (e.g., input combination) of the unlabeled candidate training object 430 may be labeled with the gain value of the target channel to generate an additional labeled training object, and the additional labeled training object may be added to the set of labeled training objects 410. The actions performed at step 1530 may be similar to those described above with respect to step 1125.

Fig. 16 is a flow diagram of a method 1600 for generating a model for each channel of a multi-channel amplifier according to one or more illustrative aspects of the present disclosure. In one or more embodiments, method 1600, or one or more steps thereof, may be performed by one or more computing systems. For example, portions of method 1600 may be performed by a processing unit of computing system 100. Method 1600, or one or more steps thereof, may be embodied in computer-executable instructions stored in a computer-readable medium, such as a non-transitory computer-readable medium. Some steps or portions of steps in the flowcharts may be omitted or changed in order.

At step 1605, the amplifier 210 may be fabricated. The amplifier 210 may be manufactured in a manufacturing facility (e.g., on a manufacturing line). An initial set of labeled training objects 410 may be collected for amplifier 210. The set of labeled training objects 410 may be collected by inputting various signals to the input ports of the amplifier 210 and measuring the resulting gain on one or more output ports of the amplifier 210. When collecting labeled training objects 410, the resulting gain may be measured for all output ports of amplifier 210.

At step 1610, an ML model 420 for a first target channel of the amplifier 210 may be generated. The generated ML model 420 may model the gain of a single channel of the amplifier 210. The generated ML model 420 may include a linear model, a quadratic model, a tree-based ensemble model, or any other type of ML model 420, such as a neural network, a tree, and the like. The ML model 420 may be generated when the amplifier 210 is at a manufacturing facility, such as when the amplifier 210 is on a production line. The ML model 420 may be generated at a test facility, which may be at the same or a different location as the manufacturing facility. ML model 420 may be generated using steps 505 to 536 of method 500 described above.

At step 1615, it may be determined whether an ML model 420 has been generated for each output port (e.g., channel) of the amplifier device 210. In some cases, a list of desired channels may be used, and a determination may be made as to whether an ML model 420 has been generated for each of the output ports corresponding to the desired channels. If there are more output ports (e.g., channels) for which the ML model 420 is to be generated, then the ML model 420 for the next output port (e.g., channel) may be generated at step 1620. This process may be repeated until the ML model 420 has been generated for each output port (e.g., channel) of the amplifier apparatus 210 or each of the desired channels of the amplifier apparatus 210.

Each time the ML model 420 is generated for an output port (e.g., channel) of the same amplifier device 210, the set of labeled training objects 410 may increase in size because additional training objects 440 are labeled and added to the set of labeled training objects 410, thereby updating the set of labeled training objects 410. Increasing the size of the set of labeled training objects 410 may improve the accuracy and/or efficiency of the ML model for each successive output port (e.g., channel). In other words, as more ML models 420 are generated, the amount of time used to generate each ML model 420 may be reduced from the amount of time used to generate a previously generated ML model 420.

At step 1625, each of the generated ML models 420 may be stored, for example, in a database. The generated ML model 420 may be associated with the amplifier device 210. The generated ML model 420 may be stored with the serial number of the amplifier device 210. The ML model 420 may be loaded into a memory location of the amplifier device 210. The generated ML model may be stored on a storage device associated with the amplifier device 210.

Fig. 17 is a flow diagram of a method 1700 for determining additional channels to activate in accordance with one or more illustrative aspects disclosed herein. In one or more embodiments, method 1700, or one or more steps thereof, can be performed by one or more computing systems. For example, portions of method 1700 may be performed by a processing unit of computing system 100. The software includes computer-executable instructions stored in a computer-readable medium, such as a non-transitory computer-readable medium. Some steps or portions of steps in the flowcharts may be omitted or changed in order.

At step 1705, a request to activate one or more additional channels and/or deactivate one or more channels of the amplifier device 210 may be received. For example, a technician may install equipment at one location and may request that newly installed equipment be connected to the output ports 225-1-225-40 of the amplifier device 210. In another example, the service may no longer be used and the input and/or output ports corresponding to the channel may be deactivated.

At step 1710, the current channel load of the amplifier device 210 may be determined. The current channel load may indicate which channels and/or ports are active and which channels and/or ports are inactive. For example, the channel load may indicate which of output ports 225-1-225-40 are connected to a cable and/or which of input ports 220-1-220-40 are active. The channel load may be determined by querying a database, which may include network configuration information. The channel load may be determined by a user, such as a technician. The channel load may be determined by querying the amplifier device 210. Channel load may be determined by monitoring network activity.

At step 1715, the generated ML model 420 for the gain of each channel of the amplifier device 210 may be retrieved. The generated ML model 420 may be retrieved from a database. A user, such as a technician, may enter a serial number and/or other identifying information for the amplifier device 210 to retrieve the generated ML model 420.

At step 1720, the generated ML models 420 may be used to determine one or more additional channels to be activated or deactivated. The current channel load determined at step 1710 may be used with one or more of the generated ML models 420 to determine additional channels to activate. The additional channels may be selected to minimize any change in gain of the channels that are currently active in the channel load. A channel may be selected based on minimizing the change in signal power on the channels carrying existing active signals.
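A hedged sketch of how the per-channel gain models might drive the selection at step 1720; the model interface (a predict() method on a -1/+1 load vector), the dictionary of models, and the max-shift criterion are illustrative assumptions rather than the required implementation.

```python
import numpy as np

def pick_channel_to_activate(models, current_load, inactive_channels):
    """Choose the inactive channel whose activation least disturbs active-channel gains.

    models: dict mapping channel index -> fitted gain model with a predict() method,
    current_load: (p,) vector of -1/+1 values, inactive_channels: candidate indices.
    Assumes at least one channel is currently active; all names are illustrative.
    """
    active = [ch for ch, v in enumerate(current_load) if v == 1]
    baseline = {ch: models[ch].predict(current_load[None, :])[0] for ch in active}
    best_ch, best_shift = None, np.inf
    for ch in inactive_channels:
        trial = current_load.copy()
        trial[ch] = 1                          # hypothetically switch the candidate channel on
        shift = max(abs(models[a].predict(trial[None, :])[0] - baseline[a]) for a in active)
        if shift < best_shift:                 # keep the channel with the smallest worst-case gain change
            best_ch, best_shift = ch, shift
    return best_ch, best_shift
```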

Fig. 18 is a flow diagram of a method 1800 for determining an optical signal-to-noise ratio in accordance with one or more illustrative aspects of the present disclosure. In one or more embodiments, method 1800, or one or more steps thereof, may be performed by one or more computing systems. For example, the method 1800 may be performed by a processing unit of the computing system 100. The method 1800, or one or more steps thereof, may be embodied in computer-executable instructions stored in a computer-readable medium, such as a non-transitory computer-readable medium. Some steps or portions of steps in the flowcharts may be omitted or changed in order.

At step 1805, one or more identifiers of the amplifier 210 and/or amplifier location may be received. The amplifier 210 and amplifier location may correspond to a link, such as a link for data transmission. The link may comprise a fiber optic connection. For each amplifier 210, an identifier of the amplifier 210 and/or a location of the amplifier 210 may be received. The identifier of the amplifier 210 may include a serial number and/or any other identifying information. The location of the amplifier 210 may include one or more distances from other objects on the link, such as distances from other amplifiers 210. The position of the amplifier 210 may be measured, for example, by transmitting data over the link.

At step 1810, an indication of a channel load of a link may be received. The channel load may indicate one or more active channels on the link. At step 1815, the generated ML model for each channel in each amplifier 210 on the link may be retrieved. The actions performed at steps 1810 and 1815 may be similar to those described above with respect to steps 1710 and 1715.

At step 1820, a Signal-to-Noise Ratio (SNR), such as an optical Signal-to-Noise Ratio (OSNR), may be predicted for the link. The OSNR may be predicted using the channel load and the generated ML model for each channel of each amplifier 210 on the link. The OSNR may be determined for one or more channels of the link. The OSNR may be calculated separately for each channel of the link.

Although example embodiments are described above, the various features and steps may be combined, divided, omitted, rearranged, modified, or augmented in any desired manner, depending on the particular result or application. Various elements have been described herein as "A and/or B," which is intended to represent any of the following: "A or B", "A and B", "one or more of A and one or more of B". Various alterations, modifications and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements as are made obvious by this disclosure are intended to be part of this description though not expressly stated herein, and are intended to be within the scope of this disclosure. Accordingly, the foregoing description is by way of example only, and not limiting. This patent is limited only as defined in the following claims and the equivalents thereto.
