Network threat indicator extraction and response

Document No.: 1617000    Publication date: 2020-01-10

Reading note: This technology, Network threat indicator extraction and response, was designed and created by I·D·瑞格 and B·R·罗根 on 2019-07-03. Its main content is summarized as follows: A device includes a communication interface and a processor. The communication interface is configured to receive a cyber-threat report. The processor is configured to extract an indicator from the cyber-threat report. The indicator is reported as being associated with a cyber threat. The processor is further configured to determine, based on the indicator, a confidence score that indicates a likelihood that the indicator is associated with malicious activity. The processor is further configured to determine, based on the indicator, an impact score indicative of a potential severity of the malicious activity. The processor is further configured to identify an action to perform based on the indicator, the confidence score, and the impact score. The action includes blocking network traffic corresponding to the indicator or monitoring network traffic corresponding to the indicator. The processor is further configured to initiate performance of the action.

1. A method, comprising:

receiving, at a device, a cyber-threat report;

extracting, at the device, an indicator from the cyber-threat report, the indicator being reported as being associated with a cyber-threat;

determining, based on the indicator, a confidence score that indicates a likelihood that the indicator is associated with malicious activity;

determining an impact score indicative of a potential severity of the malicious activity based on the indicator;

identifying an action to perform based on the indicator, the confidence score, and the impact score, wherein the action comprises blocking network traffic corresponding to the indicator or monitoring network traffic corresponding to the indicator; and

initiating execution of the action at the device.

2. The method of claim 1, wherein the indicator comprises an Internet Protocol (IP) address, a virus signature, an email address, an email subject, a domain name, a Uniform Resource Identifier (URI), a Uniform Resource Locator (URL), a file name, a message digest algorithm 5 hash (MD5 hash), a file path, or a combination thereof.

3. The method of claim 1, wherein the confidence score and the impact score are based on one or more attributes associated with the indicator.

4. The method of claim 1, further comprising determining an attribute associated with the indicator based on the cyber-threat report, wherein the attribute comprises an indicator type, a threat type, an attack type, a first seen date, a last seen date, a first reported date, a last reported date, a report source, a specific keyword, a kill chain stage, an attribution identifier, or an attribution confidence, and wherein at least one of the confidence score or the impact score is based on the attribute.

5. The method of claim 1, further comprising determining an attribute associated with the indicator, the attribute comprising a report volume or a false positive rate, wherein the report volume comprises a count of reports indicating that the indicator is associated with the malicious activity, wherein the false positive rate is based on a first number of times the indicator is detected as being associated with non-malicious activity and a second number of times the indicator is detected as being associated with malicious activity, and wherein at least one of the confidence score or the impact score is based on the attribute.

6. The method of claim 1, wherein the confidence score is based on one or more attributes associated with the indicator, the one or more attributes including a first seen date, a last seen date, an indicator age, a first reported date, a last reported date, a report source, a source reputation score, a report volume, a confidence of attribution, a particular keyword, or a false positive rate.

7. The method of claim 1, wherein the impact score is based on one or more attributes associated with the indicator, the one or more attributes including an indicator type, a report volume, a kill chain phase, a threat type, an attack type, a specific keyword, or an attribution identifier.

8. The method of claim 1, further comprising adding, at the device, the indicator to a location in a response queue, the location based on the confidence score and the impact score.

9. The method of claim 8, further comprising: prior to initiating execution of the action:

generating, at the device, a Graphical User Interface (GUI) based on the response queue, the GUI indicating the location of the indicator in the response queue; and

providing the GUI from the device to a display device.

10. The method of claim 1, further comprising:

generating, at the device, a Graphical User Interface (GUI) indicative of the action; and

providing the GUI from the device to a display device,

wherein the performing of the action is initiated in response to receiving a user input indicating that the action is to be performed.

11. The method of claim 1, wherein initiating the performance of the action comprises scheduling performance of the action, wherein the method further comprises:

generating, at the device, a Graphical User Interface (GUI) indicative of the action;

providing the GUI from the device to a display device; and

in response to receiving a user input indicating that the action is not to be performed, canceling the performance of the action.

12. The method of claim 1, wherein initiating the performance of the action comprises performing the action independently of receiving user input.

13. An apparatus, comprising:

a communication interface configured to receive a cyber-threat report; and

a processor configured to perform the method of any of claims 1-12, comprising:

extracting an indicator from the cyber-threat report, the indicator being reported as being associated with a cyber-threat;

determining, based on the indicator, a confidence score that indicates a likelihood that the indicator is associated with malicious activity;

determining an impact score indicative of a potential severity of the malicious activity based on the indicator;

identifying an action to perform based on the indicator, the confidence score, and the impact score, wherein the action comprises blocking network traffic corresponding to the indicator or monitoring network traffic corresponding to the indicator; and

initiating execution of the action.

14. A computer-readable storage device storing instructions that, when executed by a processor, cause the processor to perform operations comprising:

receiving a cyber-threat report;

extracting an indicator from the cyber-threat report, the indicator being reported as being associated with a cyber-threat;

determining, based on the indicator, a confidence score that indicates a likelihood that the indicator is associated with malicious activity;

determining an impact score indicative of a potential severity of the malicious activity based on the indicator;

identifying an action to perform based on the indicator, the confidence score, and the impact score, wherein the action comprises blocking network traffic corresponding to the indicator or monitoring network traffic corresponding to the indicator; and

initiating execution of the action.

15. The computer-readable storage device of claim 14, wherein the operations further comprise extracting an attribute associated with the indicator from the cyber-threat report, wherein at least one of the confidence score or the impact score is based on the attribute, and wherein the attribute comprises an indicator type, a threat type, an attack type, a registration date, a first seen date, a last seen date, a first reported date, a last reported date, a report source, a specific keyword, a kill chain stage, an attribution identifier, or an attribution confidence.

Technical Field

The present disclosure relates generally to cyber threat indicator responses.

Background

Network security events in an organization are generally similar to network security events that have occurred in other organizations. Information about the network security events of other organizations may be used to detect and prevent malicious network activities. Such information may be collected from a variety of sources with varying degrees of confidence. For example, some information may be received from a trusted network security source that publishes an indicator of a cyber threat. Other information may be gathered from anonymous user posts on public network security forums. The amount of information to be analyzed may lead to backlogs that delay the detection and prevention of malicious network activities.

Disclosure of Invention

In a particular embodiment, an apparatus includes a communication interface and a processor. The communication interface is configured to receive a cyber-threat report. The processor is configured to extract an indicator from the cyber-threat report. The indicator is reported as being associated with a cyber threat. The processor is further configured to determine, based on the indicator, a confidence score that indicates a likelihood that the indicator is associated with malicious activity. The processor is further configured to determine, based on the indicator, an impact score indicative of a potential severity of the malicious activity. The processor is further configured to identify an action to perform based on the indicator, the confidence score, and the impact score. The action includes blocking network traffic corresponding to the indicator or monitoring network traffic corresponding to the indicator. The processor is further configured to initiate performance of the action.

In another particular embodiment, a method includes receiving, at a device, a cyber-threat report. The method also includes extracting, at the device, an indicator from the cyber-threat report. The indicator is reported as being associated with a cyber threat. The method also includes determining, based on the indicator, a confidence score that indicates a likelihood that the indicator is associated with malicious activity. The method also includes determining, based on the indicator, an impact score indicative of a potential severity of the malicious activity. The method also includes identifying an action to perform based on the indicator, the confidence score, and the impact score. The action includes blocking network traffic corresponding to the indicator or monitoring network traffic corresponding to the indicator. The method also includes initiating performance of the action.

In another particular embodiment, a computer-readable storage device stores instructions that, when executed by a processor, cause the processor to perform operations comprising receiving a cyber-threat report. The operations also include extracting an indicator from the cyber-threat report. The indicator is reported as being associated with a cyber threat. The operations also include determining, based on the indicator, a confidence score that indicates a likelihood that the indicator is associated with malicious activity. The operations also include determining, based on the indicator, an impact score indicative of a potential severity of the malicious activity. The operations also include identifying an action to perform based on the indicator, the confidence score, and the impact score. The action includes blocking network traffic corresponding to the indicator or monitoring network traffic corresponding to the indicator. The operations also include initiating performance of the action.

The features, functions, and advantages described herein can be achieved independently in various embodiments or may be combined in yet other embodiments, further details of which can be seen with reference to the following description and drawings.

Drawings

FIG. 1 is a block diagram illustrating a system operable to perform cyber-threat indicator extraction and response;

FIG. 2 is a diagram illustrating an example of attributes associated with the indicator of FIG. 1;

FIG. 3 is a diagram illustrating an example calculation of confidence scores;

FIG. 4 is a diagram illustrating an example calculation of an impact score;

FIG. 5 is a diagram illustrating an example of attributes and corresponding actions;

FIG. 6 is a flow diagram illustrating an example of a method of cyber-threat indicator extraction and response; and

FIG. 7 is a block diagram depicting a computing environment comprising a computing device configured to support aspects of computer-implemented methods and computer-executable program instructions (or code) according to the present disclosure.

Detailed Description

Embodiments described herein relate to cyber threat indicator extraction and response. For example, a threat report analyzer receives a cyber-threat report from a first source. The cyber-threat report includes one or more indicators reported as being associated with a cyber threat. As an example, the indicator may include an Internet Protocol (IP) address reported as being associated with the cyber threat. The threat report analyzer determines an attribute associated with the indicator. For example, the threat report analyzer extracts at least one attribute from the cyber-threat report. To illustrate, the cyber-threat report may include an attribution identifier that indicates a perpetrator of the cyber threat. As another example, the threat report analyzer determines the at least one attribute based on data from a second source (e.g., a trusted source). For example, the threat report analyzer sends a request to a device associated with the second source. The request includes the indicator. The threat report analyzer receives data associated with the indicator from the device. For example, the data indicates whether the second source is also reporting that the indicator is associated with a cyber threat.

The threat report analyzer determines a confidence score and an impact score based on the attributes associated with the indicator. The confidence score indicates a likelihood that the indicator is associated with malicious activity. For example, the confidence score is higher if the second source also reports that the indicator is associated with a cyber threat. The impact score indicates the potential severity of the malicious activity. For example, the impact score is higher if the attribution identifier attributes the indicator to an actor known to be responsible for damaging and/or widespread cyber threats.

The threat report analyzer determines a location for the indicator in the response queue based on the confidence score and the impact score, and adds the indicator at that location in the response queue. The response queue indicates the order in which the indicators are to be processed for the corresponding action (if any) to be taken. The position of the indicator in the response queue represents the priority of the indicator. Indicators with higher priority are processed earlier. In some examples, the response queue has a particular capacity. When the response queue is filled to capacity, the threat report analyzer may remove lower-priority indicators from the response queue before adding higher-priority indicators to the response queue.

In response to determining that the indicator is the next indicator to be processed in the response queue, the threat report analyzer retrieves the indicator from the response queue. The threat report analyzer identifies an action to perform based on the confidence score and the impact score. For example, the action includes monitoring network traffic corresponding to the indicator or blocking network traffic corresponding to the indicator. In a particular example, the threat report analyzer identifies the action to perform in response to retrieving the indicator from the response queue. In another example, the threat report analyzer identifies the action to perform independently of adding the indicator to the response queue. The threat report analyzer initiates performance of the action. In a particular example, the action is performed independently of receiving user input indicating that the action is to be performed.

Indicators having a higher likelihood of being associated with malicious activity and/or indicators indicating a higher potential severity of malicious activity are processed earlier. Faster searching (or accessing) of higher-priority indicators improves computer functionality by enabling the threat report analyzer to prevent or reduce the impact of corresponding malicious activity. Prioritizing the indicators based on the confidence score and the impact score improves the accuracy of the priority calculation compared to subjective determination of indicator priority. The threat report analyzer is capable of filtering internet traffic, and this filtering may be customized through the rules used to compute confidence scores and impact scores from particular attributes of the indicators. Automatically performing the action (e.g., without waiting for user input indicating that the action is to be performed) reduces (e.g., eliminates) the average time to respond to malicious activity.

FIG. 1 is a block diagram of a system 100, the system 100 operable to perform cyber-threat indicator extraction and response. The system 100 includes a first device 140 coupled to one or more devices via the network 102. For example, the first device 140 is coupled to the second device 124, the third device 126, the fourth device 128, one or more additional devices, or a combination thereof via the network 102. The first device 140 corresponds to, for example, a computer, a server, a distributed system, or a combination thereof. Network 102 includes a wired network, a wireless network, or both. One or more of the second device 124, the third device 126, or the fourth device 128 includes, for example, a web server, a database, a computer, a server, a distributed system, a mobile device, a communication device, a desktop computer, a laptop computer, a tablet computer, or a combination thereof. First device 140 is coupled to network 102 via communication interface 146.

It should be noted that in the following description, various functions performed by the system 100 of fig. 1 are described as being performed by certain components or modules. However, this division of components and modules is for illustration only. In alternative aspects, functionality described herein as being performed by a particular component or module may be divided among multiple components or modules. Further, in another aspect, two or more components or modules of fig. 1 may be integrated into a single component or module. Each of the components or modules shown in fig. 1 may be implemented using hardware (e.g., a Field Programmable Gate Array (FPGA) device, an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a controller, etc.), software (e.g., instructions executable by a processor), or any combination thereof.

The first device 140 is coupled to the display device 122 via the output interface 148. The first device 140 includes a memory 142. The first device 140 includes an input interface 144. The input interface 144 is coupled to one or more input devices, such as a touch screen, a mouse, a keyboard, a microphone, a camera, or a combination thereof. The first device 140 includes a threat report analyzer 150 (e.g., a processor). Threat report analyzer 150 is used to analyze cyber threat report 101.

During operation, first device 140 receives cyber-threat report 101 from second device 124 via communication interface 146. In a particular aspect, the second device 124 is associated with a first source (e.g., a particular website, a particular organization, a third party, or a combination thereof). By way of example, the cyber-threat report 101 may include a user post on an internet forum discussing a cyber-security issue. As another example, cyber-threat report 101 may include a report of a cyber threat issued by a cyber-security organization. Threat report analyzer 150 may receive reports from many different sources simultaneously. In particular embodiments, threat report analyzer 150 generates cyber threat reports from various sources using a web crawling technique. For example, threat report analyzer 150 generates cyber threat report 101 by extracting data from a web page hosted by second device 124. In particular embodiments, threat report analyzer 150 subscribes to receive cyber threat reports from various sources. For example, threat report analyzer 150 subscribes to a service provided by the first source and receives cyber threat report 101 from second device 124 as part of the subscription. In a particular aspect, the first source includes a third party (e.g., a business entity, a security expert, or both) that monitors cyber threats and issues cyber threat reports (e.g., cyber threat report 101). A cyber threat is caused by one or more perpetrators. First device 140 receives a cyber-threat report (e.g., cyber-threat report 101) generated by the first source from second device 124.

Threat report analyzer 150 extracts one or more indicators from cyber threat report 101. For example, threat report analyzer 150 extracts indicator 103 from cyber threat report 101. In a particular aspect, the cyber-threat report 101 includes text (e.g., natural or unstructured language). In this aspect, the threat report analyzer 150 extracts the indicator 103 from the cyber threat report 101 by performing a keyword search, natural language processing, or the like. For example, the threat report analyzer 150 detects a particular keyword (or phrase) in the cyber threat report 101 by performing a keyword search (or natural language processing), and extracts the indicator 103 from the cyber threat report 101 based on detecting the particular keyword (or phrase). In a particular aspect, the cyber-threat report 101 is formatted or structured (e.g., includes key-value pairs). In this aspect, threat report analyzer 150 extracts indicator 103 from cyber threat report 101 by parsing cyber threat report 101 based on the corresponding format or structure. For example, the threat report analyzer 150 extracts a particular element (e.g., a value of a particular key-value pair) from the cyber threat report 101 as indicator 103. Cyber threat report 101 indicates that indicator 103 is reported as being associated with a cyber threat. For example, an organization reports that indicator 103 is associated with a cyber threat. As another example, a user posts indicator 103 in an internet security forum. In a particular embodiment, the indicator 103 corresponds to an indicator of compromise (IOC). An IOC includes artifacts (i.e., observable features) that indicate a cyber threat (e.g., a computer intrusion).
Indicators 103 include, but are not limited to, Internet Protocol (IP) addresses, virus signatures, email addresses, email subjects, domain names, Uniform Resource Identifiers (URIs), Uniform Resource Locators (URLs), file names, message digest algorithm 5 (MD5) hashes, file paths, or combinations thereof.
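The extraction described above can be sketched as simple pattern matching over an unstructured report. The patterns below cover only three of the listed indicator types and are illustrative assumptions, not the disclosed implementation:

```python
import re

# Hypothetical patterns for a few indicator types named in the disclosure;
# a production extractor would cover more types and "defanged" variants.
PATTERNS = {
    "ip": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "md5": re.compile(r"\b[a-fA-F0-9]{32}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def extract_indicators(report_text: str) -> list[tuple[str, str]]:
    """Return (indicator_type, value) pairs found in an unstructured report."""
    found = []
    for ioc_type, pattern in PATTERNS.items():
        for match in pattern.findall(report_text):
            found.append((ioc_type, match))
    return found

report = "Traffic from 203.0.113.7 delivered a dropper (MD5 d41d8cd98f00b204e9800998ecf8427e)."
print(extract_indicators(report))
```

A structured (e.g., key-value) report would instead be parsed with a format-specific parser, as the paragraph above notes.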

The threat report analyzer 150 determines one or more attributes 105 associated with the indicator 103. Attributes 105 include, for example, an indicator type, a threat type, an attack type, a registration date, a first seen date, a last seen date, a first reported date, a last reported date, a report source, a particular keyword, a kill chain phase, an attribution identifier, an attribution confidence, a report volume, a false positive rate, other attributes, or combinations thereof, as further described with reference to fig. 2.

In a particular aspect, the threat report analyzer 150 extracts at least some of the attributes 105 from the cyber threat report 101. For example, threat report analyzer 150 extracts an indicator type, a threat type, an attack type, a registration date, a first seen date, a last seen date, a first reported date, a last reported date, a report source, a specific keyword, a kill chain stage, an attribution identifier, an attribution confidence, or a combination thereof from cyber threat report 101.

In a particular aspect, the threat report analyzer 150 receives additional data associated with the indicator 103. For example, threat report analyzer 150 sends a first request 161 to third device 126. In a particular aspect, the third device 126 is associated with a second source (e.g., a first trusted source). First request 161 includes indicator 103. The third device 126 sends the first data 165 to the first device 140 in response to the first request 161. The first data 165 is associated with the indicator 103. For example, the first data 165 indicates whether the second source has a report of a cyber threat associated with the indicator 103.

In a particular aspect, the threat report analyzer 150 receives additional data associated with the indicator 103 from a plurality of additional sources. For example, threat report analyzer 150 sends a second request 163 to fourth device 128. In a particular aspect, the fourth device 128 is associated with a third source (e.g., a second trusted source). The fourth device 128 sends the second data 167 to the first device 140 in response to receiving the second request 163 including the indicator 103. The second data 167 is associated with the indicator 103. For example, the second data 167 indicates whether the third source has a report of a cyber threat associated with the indicator 103.

In a particular aspect, the threat report analyzer 150 has access to additional data associated with the indicator 103. For example, historical data 121 is stored in the memory 142. In a particular aspect, the historical data 121 corresponds to a log, such as a system log, a web log, or both. To illustrate, the historical data 121 indicates the number of accesses to the domain name indicated by the indicator 103. As another example, lookup data 129 (e.g., a table) is stored in memory 142. Lookup data 129 includes, for example, configuration settings, default values, user input, or a combination thereof. To illustrate, the lookup data 129 indicates that a particular keyword is associated with a particular score for calculating the confidence score 107, the impact score 109, or both, as further described with reference to figs. 2-4.

In a particular aspect, the threat report analyzer 150 determines at least some of the attributes 105 based on the first data 165, the second data 167, the historical data 121, or a combination thereof. For example, the threat report analyzer 150 determines a report volume, a false positive rate, or both, as further described with reference to fig. 2.
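A minimal sketch of the report volume and false positive rate attributes described here and in claim 5, assuming hypothetical report/detection records (the record format is an assumption, not from the disclosure):

```python
def report_volume(reports: list[dict], indicator: str) -> int:
    """Count of reports naming the indicator as associated with malicious activity."""
    return sum(1 for r in reports if r["indicator"] == indicator and r["malicious"])

def false_positive_rate(detections: list[dict], indicator: str) -> float:
    """Benign detections divided by all detections of the indicator (per claim 5)."""
    hits = [d for d in detections if d["indicator"] == indicator]
    if not hits:
        return 0.0
    benign = sum(1 for d in hits if not d["malicious"])
    return benign / len(hits)

# Hypothetical detection history for one indicator: 3 malicious, 1 benign.
detections = [
    {"indicator": "203.0.113.7", "malicious": True},
    {"indicator": "203.0.113.7", "malicious": True},
    {"indicator": "203.0.113.7", "malicious": True},
    {"indicator": "203.0.113.7", "malicious": False},
]
print(false_positive_rate(detections, "203.0.113.7"))  # 0.25
```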

The threat report analyzer 150 determines a confidence score 107 based on the attributes 105, as further described with reference to fig. 3. The confidence score 107 indicates a likelihood that the indicator 103 is associated with malicious activity. For example, the confidence score 107 indicates a higher likelihood that the indicator 103 is associated with malicious activity if the indicator 103 is reported as being associated with a cyber threat by a more trusted source (e.g., an organization that posted the cyber threat indicator) than by an anonymous source (e.g., an anonymous user on a public forum). To illustrate, an anonymous source may incorrectly (or maliciously) report the indicator 103 as being associated with a network threat. If a more trusted source reports indicator 103 as being associated with a cyber threat, indicator 103 is more likely to be associated with potentially malicious activity.
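One way to realize such a confidence score is a weighted combination of attributes; the weights and attribute names below are assumptions for illustration, not values from the disclosure:

```python
def confidence_score(attrs: dict) -> float:
    """Illustrative 0-100 confidence score; weights are assumptions."""
    score = 40.0 * attrs.get("source_reputation", 0.0)            # 0.0 to 1.0
    score += 30.0 * min(attrs.get("corroborating_sources", 0), 3) / 3
    score += 30.0 * (1.0 - attrs.get("false_positive_rate", 0.5))
    return round(score, 1)

# A trusted, corroborated report scores higher than an anonymous one.
trusted = confidence_score({"source_reputation": 0.9,
                            "corroborating_sources": 2,
                            "false_positive_rate": 0.1})
anonymous = confidence_score({"source_reputation": 0.2,
                              "corroborating_sources": 0,
                              "false_positive_rate": 0.5})
print(trusted, anonymous)  # the trusted source yields the higher score
```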

The threat report analyzer 150 determines an impact score 109 based on the attributes 105, as further described with reference to fig. 4. The impact score 109 indicates the potential severity of malicious activity associated with the indicator 103. For example, if the impact of malicious activity associated with indicator 103 is likely to be more damaging, more extensive, or both, then the malicious activity has a higher potential severity. A kill chain stage (e.g., a stage of the Cyber Kill Chain® framework, a registered trademark of Lockheed Martin Corp., Maryland) is associated with indicator 103. The particular kill chain stage indicates the possible impact of malicious activity associated with the indicator 103. For example, malicious activity for an indicator 103 associated with a first kill chain stage (e.g., reconnaissance) may be less damaging and/or less widespread than malicious activity for an indicator 103 associated with a second kill chain stage (e.g., command and control). The impact score 109 indicates a higher severity for the second kill chain stage (e.g., command and control) than for the first kill chain stage (e.g., reconnaissance).
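A minimal sketch of an impact score driven by the kill chain stage attribute; the stage names follow the conventional kill chain model, and the weights are assumptions, not values from the disclosure:

```python
# Assumed stage weights: later kill chain stages score a higher potential impact.
STAGE_IMPACT = {
    "reconnaissance": 10,
    "weaponization": 25,
    "delivery": 40,
    "exploitation": 60,
    "installation": 75,
    "command_and_control": 90,
    "actions_on_objectives": 100,
}

def impact_score(attrs: dict) -> int:
    """Illustrative impact score; unknown stages fall back to a neutral 50."""
    return STAGE_IMPACT.get(attrs.get("kill_chain_stage", ""), 50)

print(impact_score({"kill_chain_stage": "command_and_control"}))  # 90
print(impact_score({"kill_chain_stage": "reconnaissance"}))       # 10
```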

The threat report analyzer 150 determines an overall score 111 for the indicator 103 based on the confidence score 107, the impact score 109, or both. In a particular example, the overall score 111 corresponds to a weighted sum of the confidence score 107 and the impact score 109. Overall score 111 indicates the priority of indicator 103.

The threat report analyzer 150 is configured to add indicators to the response queue 113 in order of their overall scores. For example, threat report analyzer 150 determines a location 117 in response queue 113 based on overall score 111. Threat report analyzer 150 adds indicator 103 at location 117 in response queue 113. In a particular aspect, location 117 in response queue 113 is empty, and threat report analyzer 150 adds indicator 103 at location 117. In an alternative aspect, location 117 in response queue 113 is occupied by another indicator having a lower overall score than the overall score of indicator 103. Threat report analyzer 150 updates the locations of the indicators at and after location 117 (e.g., increments each location by 1) and adds indicator 103 at location 117. In a particular example, response queue 113 has a particular capacity. Threat report analyzer 150, in response to determining that response queue 113 is filled to capacity, removes a second indicator from response queue 113 (e.g., from the last location) before adding indicator 103 to response queue 113 at location 117. Removing the lower-priority indicator results in response queue 113 having a lower memory footprint than storing all indicators in memory 142.
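The response queue behavior described above (ordering by overall score, evicting the lowest-priority indicator when the queue is at capacity) can be sketched as follows; this is an illustrative data structure, not the disclosed implementation:

```python
class ResponseQueue:
    """Bounded queue ordered by overall score; highest score is processed first."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._entries: list[tuple[float, str]] = []  # kept sorted, highest first

    def add(self, overall_score: float, indicator: str) -> None:
        self._entries.append((overall_score, indicator))
        self._entries.sort(key=lambda e: e[0], reverse=True)
        if len(self._entries) > self.capacity:
            self._entries.pop()  # evict the lowest-priority indicator

    def pop_next(self) -> str:
        """Return the next (highest-priority) indicator to process."""
        return self._entries.pop(0)[1]

queue = ResponseQueue(capacity=2)
queue.add(55.0, "suspicious.example")
queue.add(83.0, "203.0.113.7")
queue.add(40.0, "lowpriority.example")  # evicted: queue is at capacity
print(queue.pop_next())  # 203.0.113.7
```

A production system might instead use a heap for large capacities; the sorted list keeps the location-based semantics of the description easy to see.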

Threat report analyzer 150 generates a Graphical User Interface (GUI) 123. The GUI 123 indicates one or more of the following: a portion of the cyber threat report 101, the indicator 103, one or more of the attributes 105, the confidence score 107, the impact score 109, the overall score 111, the location 117, or the response queue 113. In a particular embodiment, threat report analyzer 150 generates GUI 123 in response to adding indicator 103 to response queue 113. In another embodiment, the threat report analyzer 150 generates the GUI 123 in response to receiving a user input 125 from a first user 132 (e.g., a network administrator) requesting updated information about the response queue 113. In a particular aspect, the first user 132 provides a user input 125 indicating an update to the data associated with the indicator 103, and the threat report analyzer 150 updates the data. For example, the user input 125 indicates at least one of an updated location, an updated attribute, an updated confidence score, an updated impact score, or an updated overall score. In response to receiving the user input 125, the threat report analyzer 150 updates the location 117, the attributes 105, the confidence score 107, the impact score 109, or the overall score 111 to indicate the updated values. In a particular aspect, the threat report analyzer 150 removes the indicator 103 from the response queue 113 in response to receiving a user input 125 indicating that the indicator 103 is to be removed from the response queue 113.

Threat report analyzer 150 is configured to process response queue 113. For example, threat report analyzer 150 determines that indicator 103 is the next indicator to process in response to determining that a next pointer indicates position 117. Threat report analyzer 150, in response to determining that indicator 103 is the next indicator to process, retrieves indicator 103 from response queue 113 and updates the next pointer to indicate a position in response queue 113 that is subsequent to position 117.

Threat report analyzer 150 identifies an action 115 to perform based on indicator 103. For example, the threat report analyzer 150 identifies the action 115 based on the indicator 103, the confidence score 107, the impact score 109, or a combination thereof, as further described with reference to fig. 5. Action 115 may include, for example, blocking network traffic associated with indicator 103, monitoring network traffic associated with indicator 103, or both. For example, action 115 may include blocking network traffic from a first sub-domain of the domain indicated by indicator 103, monitoring network traffic from a second sub-domain of the domain, or both. As another example, action 115 may include blocking a first type of traffic from the IP address indicated by indicator 103, monitoring a second type of traffic from the IP address, or both.

In a particular aspect, threat report analyzer 150 initiates performance of action 115 in response to identifying action 115. In particular embodiments, threat report analyzer 150 performs action 115 in response to identifying action 115. In an alternative embodiment, initiating the performance of action 115 includes scheduling the performance of action 115. For example, threat report analyzer 150 schedules the performance of action 115 by adding action 115 to action queue 119. Threat report analyzer 150 generates (or updates) GUI 123. The GUI 123 indicates one or more of the following: a portion of the cyber threat report 101, the indicator 103, one or more of the attributes 105, the confidence score 107, the impact score 109, the overall score 111, the action 115, or the action queue 119. In particular embodiments, threat report analyzer 150 generates (or updates) GUI 123 in response to adding action 115 to the action queue 119. In another embodiment, the threat report analyzer 150 generates (or updates) the GUI 123 in response to receiving user input 125 from the first user 132 requesting information about an action added to the action queue 119.

In particular embodiments, threat report analyzer 150 is configured to perform an action in action queue 119 in response to receiving an explicit user request to perform the action. For example, threat report analyzer 150 initiates performance of action 115 in response to receiving user input 125 indicating that action 115 is to be performed. In a particular example, the first user 132 reviews the action 115 added to the action queue 119 and provides a user input 125 indicating approval of the action 115. Alternatively, the threat report analyzer 150 refrains from performing the action 115 in response to determining that user input 125 indicating that the action 115 is to be performed has not been received, or that the user input 125 indicates that the action 115 is not to be performed. Threat report analyzer 150 removes action 115 from action queue 119 in response to receiving user input 125 indicating that action 115 is not to be performed.

In particular embodiments, threat report analyzer 150 is configured to perform an action in action queue 119 unless an explicit user cancellation is received in time. For example, threat report analyzer 150 performs action 115 in response to determining that user input 125 indicating that action 115 is not to be performed has not been received. To illustrate, threat report analyzer 150 performs action 115 unless first user 132 provides user input 125 indicating that the performance of action 115 is cancelled. If threat report analyzer 150 receives user input 125 indicating that action 115 is not to be performed, threat report analyzer 150 cancels the performance of action 115 by removing action 115 from action queue 119.

In particular embodiments, memory 142 includes a first action queue (e.g., action queue 119) of actions to be performed in response to an explicit user request and a second action queue (e.g., action queue 119) of actions to be performed unless an explicit user cancellation is received. The threat report analyzer 150 determines whether the action 115 is to be added to the first action queue or the second action queue based on the confidence score 107, the impact score 109, the overall score 111, the type of the action 115, or a combination thereof. In a particular aspect, the threat report analyzer 150 determines an action score 127 based on the confidence score 107, the impact score 109, the overall score 111, or a combination thereof. For example, threat report analyzer 150 may add action 115 to the second action queue in response to determining that action score 127 is greater than a first threshold or that action 115 is of a first type (e.g., monitoring network traffic associated with indicator 103), and threat report analyzer 150 may add action 115 to the first action queue in response to determining that action score 127 is less than or equal to the first threshold and that action 115 is of a second type (e.g., blocking network traffic associated with indicator 103).

The threat report analyzer 150 performs an action 115 from the first action queue in response to receiving user input 125 indicating that the action 115 is to be performed. Alternatively, the threat report analyzer 150 performs an action 115 from the second action queue in response to determining that user input 125 indicating that the action 115 is not to be performed has not been received.
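The two-queue routing described above may be sketched as follows; the threshold value (0.7) and the function name `route_action` are hypothetical assumptions for illustration only.

```python
# Hypothetical routing of an action to the approval-required queue or
# the auto-execute queue based on its action score and action type.
MONITOR, BLOCK = "monitor", "block"

def route_action(action_type, action_score, first_threshold=0.7):
    """Return which queue the action belongs in.

    'auto'     -> performed unless an explicit user cancellation arrives
    'approval' -> performed only on an explicit user request
    """
    # A high action score, or a low-disruption action type such as
    # monitoring, goes to the queue that executes without prior approval.
    if action_score > first_threshold or action_type == MONITOR:
        return "auto"
    return "approval"
```

In this sketch, a blocking action with a low action score requires explicit approval, while monitoring actions (which cause little or no business disruption) execute unless cancelled.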

Thus, the system 100 enables the indicators 103 to be prioritized based on the confidence score 107 and the impact score 109. When the action score 127 satisfies a threshold (e.g., indicating a high confidence or a high potential severity of malicious activity), or when the action 115 is of a type that causes little or no disruption to normal business activities (e.g., monitoring traffic), the action 115 may be performed without waiting for prior user approval. Thus, with little or no delay associated with waiting for user approval (e.g., at midnight), the detrimental effects of malicious activity may be reduced (e.g., prevented). Computer functionality is improved by enabling actions to be performed that prevent or reduce the impact of corresponding malicious activity and by enabling faster searching (or accessing) of higher priority indicators. Removing lower priority indicators results in response queue 113 having a lower memory footprint than storing all indicators in memory 142.

Referring to fig. 2, a table is shown and generally designated 200. The first column of table 200 includes examples of the attributes 105. The second column of table 200 includes attribute values 290 as illustrative values of the examples of the attributes 105 indicated in the first column. The third column of the table 200 includes a confidence/impact/action value 292 that indicates whether the example of the attribute 105 indicated in the first column is used to determine the confidence score 107, the impact score 109, both the confidence score 107 and the impact score 109, or the action 115. It should be understood that attributes 105 may include fewer, additional, or different attributes than those shown in table 200. In some implementations, particular ones of the attributes 105 may be used to determine the confidence score 107, the impact score 109, both the confidence score 107 and the impact score 109, or the action 115 differently than shown in the table 200.

The attributes 105 include a first seen date 201 (e.g., 3/12/2018 4:33), a last seen date 203 (e.g., 5/12/2018 8:23), a report volume 205 (e.g., 451), a kill chain stage 207 (e.g., command and control (C2)), an attack type 209 (e.g., malware), a threat type 211 (e.g., malicious IP), one or more description keywords 213 (e.g., "scan"), one or more keyword tags 215 (e.g., "exfil"), an attribution identifier 217 (e.g., Fluffy Bunny), an attribution confidence 219 (e.g., high), a source count 221 (e.g., 3), a source reputation score 223 (e.g., high), additional source data 225 (e.g., 13/52), a first reported date 227 (e.g., 1/13/2018 5:11), a last reported date 229 (e.g., 5/23/2018 12:42), one or more manually applied actions 231 (e.g., block-proxy), an indicator type 233 (e.g., IPv4 address), an indicator creation date 235 (e.g., 8/15/2018), internal hits 237 (e.g., 500), a last internal hit date 239 (e.g., 4/12/2001), "targeted" 240 (e.g., yes), a registration date 242 (e.g., 1/16/2017), a false positive rate 244 (e.g., 50%), additional attributes, or a combination thereof.

In a particular example, the threat report analyzer 150 of fig. 1 extracts, from the cyber threat report 101, the first seen date 201 (e.g., 3/12/2018 4:33), the last seen date 203 (e.g., 5/12/2018 8:23), the report volume 205 (e.g., 451), the kill chain stage 207 (e.g., C2), the attack type 209 (e.g., malware), the threat type 211 (e.g., malicious IP), the description keyword 213 (e.g., "scan"), the keyword tag 215 (e.g., "exfil"), the attribution identifier 217 (e.g., Fluffy Bunny), the attribution confidence 219 (e.g., high), the source count 221 (e.g., 3), the source reputation score 223 (e.g., high), the first reported date 227 (e.g., 1/13/2018 5:11), the last reported date 229 (e.g., 5/23/2018 12:42), the indicator type 233 (e.g., an IPv4 address), "targeted" 240 (e.g., yes), the registration date 242 (e.g., 1/16/2017), or a combination thereof. In a particular example, the threat report analyzer 150 determines the additional source data 225 (e.g., 13/52), the manually applied actions 231 (e.g., block-proxy), the indicator creation date 235 (e.g., 8/15/2018), the internal hits 237 (e.g., 500), the last internal hit date 239 (e.g., 4/12/2001), the false positive rate 244 (e.g., 50%), or a combination thereof, based on the cyber threat report 101, the historical data 121, the first data 165, the second data 167, or a combination thereof, as described herein.

Threat report analyzer 150 may determine the confidence score 107 based on the first seen date 201 (e.g., 3/12/2018 4:33), the last seen date 203 (e.g., 5/12/2018 8:23), the report volume 205 (e.g., 451), the description keyword 213 (e.g., "scan"), the keyword tag 215 (e.g., "exfil"), the attribution confidence 219 (e.g., high), the source count 221 (e.g., 3), the source reputation score 223 (e.g., high), the additional source data 225 (e.g., 13/52), the first reported date 227 (e.g., 1/13/2018 5:11), the last reported date 229 (e.g., 5/23/2018 12:42), the manually applied actions 231 (e.g., block-proxy), the indicator creation date 235 (e.g., 8/15/2018), the registration date 242 (e.g., 1/16/2017), the false positive rate 244 (e.g., 50%), or a combination thereof, as described herein. In a particular aspect, the confidence score 107 corresponds to a weighted sum of scores of various ones of the attributes 105. For example, threat report analyzer 150 assigns a first weight to the first seen date 201 and a second weight to the last seen date 203, determines a first score for the first seen date 201 based on the value of the first seen date 201 (e.g., 3/12/2018 4:33), determines a second score for the last seen date 203 based on the value of the last seen date 203 (e.g., 5/12/2018 8:23), and determines the confidence score 107 based on a weighted sum of the first score and the second score (e.g., confidence score 107 = first weight × first score + second weight × second score).
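The weighted-sum computation above can be illustrated with a short sketch; the attribute names, per-attribute scores, and weights below are illustrative assumptions, not values from the disclosure.

```python
# Weighted sum of per-attribute scores, as described above:
# confidence = sum(weight_i * score_i) over the scored attributes.
def confidence_score(attribute_scores, weights):
    """attribute_scores and weights map attribute name -> value."""
    return sum(weights[name] * score
               for name, score in attribute_scores.items())

# Hypothetical example with two attributes:
scores  = {"first_seen": 0.8, "last_seen": 0.6}  # per-attribute scores
weights = {"first_seen": 0.4, "last_seen": 0.6}  # per-attribute weights
# confidence = 0.4 * 0.8 + 0.6 * 0.6 = 0.68
```

The impact score 109 can be computed the same way with its own attribute set and weights.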

Threat report analyzer 150 may determine the impact score 109 based on the report volume 205 (e.g., 451), the kill chain stage 207 (e.g., C2), the attack type 209 (e.g., malware), the threat type 211 (e.g., malicious IP), the description keyword 213 (e.g., "scan"), the keyword tag 215 (e.g., "exfil"), the attribution identifier 217 (e.g., Fluffy Bunny), the additional source data 225 (e.g., 13/52), the indicator type 233 (e.g., IPv4 address), "targeted" 240 (e.g., yes), or a combination thereof, as described herein. In a particular aspect, the impact score 109 corresponds to a weighted sum of scores of various ones of the attributes 105.

The first seen date 201 (e.g., 3/12/2018 4:33) indicates the date (e.g., timestamp) on which the indicator 103 was reportedly first seen (or detected). For example, the cyber threat report 101 is based on a plurality of reports, and a first report (e.g., a user post on a public forum) having the earliest seen date among the plurality of reports indicates that the indicator 103 was detected on the first seen date 201 (e.g., 3/12/2018 4:33). In a particular example, the confidence score 107 is lower for a first seen date 201 that is earlier than a threshold first seen date. For example, if indicator 103 was first seen two years ago, indicator 103 is less likely to be associated with potentially malicious activity.

The last seen date 203 (e.g., 5/12/2018 8:23) indicates the date (e.g., timestamp) on which the indicator 103 was reportedly last seen (or detected). For example, a second report (e.g., a network security publication) having the most recent seen date among the plurality of reports indicates that indicator 103 was detected on the last seen date 203 (e.g., 5/12/2018 8:23). In a particular example, the confidence score 107 is lower for a last seen date 203 that is earlier than a threshold last seen date. For example, if indicator 103 was last seen a year ago, indicator 103 is less likely to be associated with potentially malicious activity.

The registration date 242 (e.g., 1/16/2017) indicates the date (e.g., timestamp) on which the indicator 103 was reportedly registered by a registration authority. For example, the cyber threat report 101 indicates that the indicator 103 (e.g., a domain name) was registered by a registrar (e.g., a domain name registrar) on the registration date 242 (e.g., 1/16/2017). In a particular example, the confidence score 107 is lower for a registration date 242 that is earlier than a threshold registration date. For example, if the indicator 103 was registered two years ago, the indicator 103 is less likely to be associated with potentially malicious activity.

The first reported date 227 (e.g., 1/13/2018 5:11) indicates the date (e.g., timestamp) of the earliest report associated with indicator 103. For example, among the plurality of reports associated with the indicator 103, a first report (e.g., a user post on a public forum) has the earliest report date (e.g., the date of the user post on the public forum). In a particular example, the confidence score 107 is lower for a first reported date 227 that is earlier than a threshold first reported date. For example, if indicator 103 was first reported two years ago, indicator 103 is less likely to be associated with potentially malicious activity.

The last reported date 229 (e.g., 5/23/2018 12:42) indicates the date (e.g., timestamp) of the most recent report associated with indicator 103. For example, among the plurality of reports associated with the indicator 103, a second report (e.g., a network security publication) has the most recent report date (e.g., the date of the publication). In a particular example, the confidence score 107 is lower for a last reported date 229 that is earlier than a threshold last reported date. For example, if indicator 103 was last reported a year ago, indicator 103 is less likely to be associated with potentially malicious activity.

In a particular aspect, a report (e.g., a user post on a public forum) has a report date (e.g., the first reported date 227 or the last reported date 229) on which the report (e.g., the post) was published. The report (e.g., the user post) may indicate a seen date (e.g., the first seen date 201 or the last seen date 203) on which the indicator 103 was reportedly detected (e.g., the user indicates in the user post that a network traffic log shows that the indicator 103 was detected on the seen date). The seen date is less than or equal to the report date.

The report volume 205 (e.g., 451) indicates a count of reports indicating that indicator 103 is associated with malicious activity. For example, cyber threat report 101 is based on multiple reports from multiple sources. To illustrate, cyber threat report 101 indicates that a first particular source received a first number of reports (e.g., 51) indicating that indicator 103 is associated with malicious activity, and a second particular source received a second number of reports (e.g., 400) indicating that indicator 103 is associated with malicious activity. Threat report analyzer 150 determines the report volume 205 (e.g., 451) based on the first number of reports and the second number of reports (e.g., report volume 205 = first number of reports + second number of reports). In a particular aspect, the threat report analyzer 150 derives the report volume 205 based on the cyber threat report 101, the first data 165, the second data 167, or a combination thereof. For example, threat report analyzer 150 determines a first quantity (e.g., 1) of cyber threat reports 101 from a first source (e.g., second device 124) indicating that indicator 103 is associated with malicious activity. Threat report analyzer 150 determines a second quantity (e.g., 450) corresponding to first data 165 indicating that a second source (e.g., third device 126) received the second quantity of reports from various sources indicating that indicator 103 is associated with malicious activity. Threat report analyzer 150 determines the report volume 205 (e.g., 451) based on the first quantity and the second quantity (e.g., report volume 205 = first quantity + second quantity). In a particular example, the confidence score 107 is higher for a report volume 205 that is above a report volume confidence threshold. For example, if many reports indicate that the indicator 103 is associated with malicious activity, the indicator 103 is more likely to be associated with potentially malicious activity. In a particular example, the impact score 109 is higher for a report volume 205 that is above a report volume impact threshold. For example, if many reports indicate that the indicator 103 is detected as being associated with malicious activity, the potentially malicious activity associated with the indicator 103 may have a more severe impact.
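The aggregation above can be sketched in a few lines; the function name `aggregate_reports` and the mapping of source name to count are illustrative assumptions.

```python
# Hypothetical aggregation of per-source report counts into the
# report volume 205 and the source count 221 described in this section.
def aggregate_reports(counts_per_source):
    """Given a mapping of source name -> report count, return
    (report volume, source count): the sum of the counts and the
    number of sources contributing at least one report."""
    volume = sum(counts_per_source.values())
    sources = sum(1 for c in counts_per_source.values() if c > 0)
    return volume, sources

# e.g., 51 reports from one source and 400 from another -> volume 451
```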

The false positive rate 244 (e.g., 33%) is based on the number of times the indicator 103 is detected (or reported) as being associated with non-malicious (or benign) activity and the number of times the indicator 103 is detected (or reported) as being associated with malicious activity. For example, cyber threat report 101 indicates that indicator 103 is reportedly associated with malicious activity. Threat report analyzer 150 determines, based on historical data 121, cyber threat report 101, or both, a first number of times (e.g., 1) that indicator 103 has been reported (or detected) as being associated with non-malicious activity and a second number of times (e.g., 2) that indicator 103 has been reported (or detected) as being associated with malicious activity. The threat report analyzer 150 determines the false positive rate 244 (e.g., 33%) based on the first number and the second number (e.g., false positive rate 244 = first number / (first number + second number)). In a particular example, the confidence score 107 is lower for a false positive rate 244 above a false positive rate threshold. For example, if indicator 103 is more frequently reported (or detected) as being associated with non-malicious activity, indicator 103 is less likely to be associated with potentially malicious activity.
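The false positive rate formula above maps directly to a small sketch; the zero-history behavior is an assumption for illustration.

```python
# False positive rate, per the description above:
# rate = benign detections / (benign + malicious detections).
def false_positive_rate(non_malicious_count, malicious_count):
    total = non_malicious_count + malicious_count
    if total == 0:
        return 0.0  # assumption: no history yields a rate of 0
    return non_malicious_count / total

# e.g., reported once as benign and twice as malicious -> 1/3, i.e. ~33%
```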

The description keyword 213 (e.g., "scan") indicates a particular keyword detected in the descriptions of the plurality of reports associated with indicator 103. The keyword tag 215 (e.g., "exfil") indicates a particular keyword detected in tags associated with the plurality of reports. In particular embodiments, the confidence score 107 is higher for indicator 103 if the lookup data 129 indicates that the description keyword 213 (e.g., "scan"), the keyword tag 215 (e.g., "exfil"), or a combination thereof has previously been associated with malicious activity. To illustrate, the confidence score 107 is higher if the keyword tag 215 (e.g., "exfil") indicates a particular activity (e.g., exfiltration or data extraction).

The kill chain stage 207 (e.g., C2) indicates a stage in a kill chain (e.g., a cyber kill chain) indicated in the report associated with indicator 103. The kill chain includes multiple stages, such as reconnaissance (e.g., detecting vulnerabilities), weaponization (e.g., building a deliverable payload), delivery (e.g., sending the payload, such as a malicious link), exploitation (e.g., executing code at the target's computer), installation (e.g., installing malware on the target asset), C2 (e.g., creating a channel for remotely controlling a system), and actions (e.g., remotely executing malicious actions). In a particular example, the impact score 109 is higher for a kill chain stage 207 associated with a later stage in the kill chain. To illustrate, if the indicator 103 is reportedly associated with a particular stage in the kill chain (e.g., C2), the potentially malicious activity associated with the indicator 103 may have a more severe impact.
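The "later stage scores higher" rule can be sketched as a simple ordinal mapping; the numeric scale (rank divided by the number of stages) is an illustrative assumption.

```python
# Hypothetical mapping from kill chain stage to an impact contribution:
# later stages score higher, per the description above.
KILL_CHAIN_ORDER = [
    "reconnaissance", "weaponization", "delivery",
    "exploitation", "installation", "c2", "actions",
]

def kill_chain_impact(stage):
    """Return an impact contribution in (0, 1]; later stages score higher."""
    rank = KILL_CHAIN_ORDER.index(stage.lower())
    return (rank + 1) / len(KILL_CHAIN_ORDER)
```

Under this sketch, an indicator reported at the C2 stage contributes a higher impact score than one reported at the delivery stage.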

The attribution identifier 217 (e.g., Fluffy Bunny) represents a threat actor reportedly associated with the indicator 103. In a particular example, the impact score 109 is higher if the attribution identifier 217 indicates a threat actor associated with malicious activity that has a more severe impact (e.g., is more disruptive, more extensive, or both). To illustrate, if the indicator 103 is reportedly associated with a particular attribution identifier that indicates a threat actor who has previously engaged in malicious activity having a more severe impact, the potentially malicious activity associated with the indicator 103 may have a more severe impact.

The attack type 209 (e.g., malware) indicates the type of network attack reportedly associated with indicator 103. The threat type 211 (e.g., malicious IP) indicates the type of cyber threat reportedly associated with indicator 103. The indicator type 233 (e.g., IPv4 address) indicates the type of the indicator 103. For example, the indicator type 233 may include an IP address, a virus signature, an email address, an email subject, a domain name, a URI, a URL, a file name, an MD5 hash, a file path, or a combination thereof. In a particular example, the impact score 109 is higher for an attack type 209, a threat type 211, an indicator type 233, or a combination thereof associated with malicious activity that has a more severe impact (e.g., is more destructive, more extensive, or both). To illustrate, if indicator 103 is reportedly associated with a particular attack type, a particular threat type, a particular indicator type, or a combination thereof that has previously resulted in malicious activity having a more severe impact, the potentially malicious activity associated with indicator 103 may have a more severe impact.

In particular embodiments, historical data 121 indicates that malicious activity associated with the kill chain stage 207 (e.g., C2), the attack type 209 (e.g., malware), the threat type 211 (e.g., malicious IP), the description keyword 213 (e.g., "scan"), the keyword tag 215 (e.g., "exfil"), the indicator type 233, or a combination thereof has previously been detected and has a corresponding impact severity. In this embodiment, threat report analyzer 150 determines the impact score 109 based on the impact severity.

The attribution confidence 219 (e.g., high) indicates a reported likelihood that the indicator 103 is associated with the threat actor indicated by the attribution identifier 217. In a particular example, the confidence score 107 is higher for indicator 103 if the attribution confidence 219 is high.

The source count 221 (e.g., 3) indicates a count of sources that have provided at least one report associated with indicator 103. For example, second device 124 (or a first source) generates cyber threat report 101 indicating that indicator 103 is reportedly associated with malicious activity. As another example, the third device 126 (or a second source) and the fourth device 128 (or a third source) generate first data 165 and second data 167, respectively, indicating that the indicator 103 is associated with malicious activity. The threat report analyzer 150 determines the source count 221 (e.g., 3) based on the count of sources (e.g., the first source, the second source, and the third source) from which at least one report is received indicating that the indicator 103 is associated with malicious activity. In a particular example, the confidence score 107 is higher for indicator 103 if the source count 221 (e.g., 3) is higher.

The source reputation score 223 (e.g., high or 10) indicates a level of trustworthiness associated with a source. In a particular aspect, the source reputation score 223 indicates a level of trustworthiness associated with the sources of the plurality of reports, and the cyber threat report 101 indicates the source reputation score 223. In another aspect, the source reputation score 223 indicates a level of trustworthiness associated with a particular source, such as a first source (e.g., the second device 124) of the cyber threat report 101. In this aspect, threat report analyzer 150 retrieves the source reputation score 223 from memory 142. For example, historical data 121 indicates the source reputation score 223, and threat report analyzer 150 updates the source reputation score 223 to the value indicated by historical data 121. In another example, the source reputation score 223 is based on lookup data 129 (e.g., configuration settings, default data, user input 125, or a combination thereof). For example, the lookup data 129 indicates the source reputation score 223 of the first source (e.g., the second device 124). In a particular example, the confidence score 107 is higher for indicator 103 if the source reputation score 223 (e.g., high) is higher.

"Targeted" 240 (e.g., yes) indicates whether the indicator 103 is reportedly associated with a targeted cyber threat. For example, the indicator 103 may be associated with a cyber threat directed at a particular organization, a particular person, or both. In a particular example, the impact score 109 is higher for indicator 103 if "targeted" 240 (e.g., yes) indicates that indicator 103 is reportedly associated with a targeted cyber threat. In a particular example, the impact score 109 is higher (e.g., high) for indicator 103 if "targeted" 240 indicates that indicator 103 is associated with a targeted cyber threat directed at a large or sensitive target (e.g., a hospital, school, airport, power grid, government department, financial institution, or government official). In this example, the impact score 109 is lower (e.g., medium) for indicator 103 if "targeted" 240 indicates that indicator 103 is reportedly associated with a targeted cyber threat directed at a small or obscure target.

In certain examples, the threat report analyzer 150 generates the additional source data 225 (e.g., 13/52). For example, threat report analyzer 150 sends requests to additional sources for information about indicator 103. To illustrate, the threat report analyzer 150 sends a first request 161 to the third device 126 (e.g., the second source), sends a second request 163 to the fourth device 128 (e.g., the third source), or both. The first request 161, the second request 163, or both include the indicator 103. Threat report analyzer 150 receives data from the additional sources indicating whether the indicator 103 has reportedly been detected as associated with a cyber threat. For example, the threat report analyzer 150 receives the first data 165 from the third device 126, the second data 167 from the fourth device 128, or both. The threat report analyzer 150 generates (or updates) the additional source data 225 (e.g., 13/52) indicating a count of the additional sources from which data was received indicating that the indicator 103 has been reported as associated with a cyber threat, relative to the count of additional sources queried (e.g., 13 of 52 sources). In a particular example, the confidence score 107, the impact score 109, or both are higher for indicator 103 if the additional source data 225 (e.g., 13/52) indicates that a higher count of sources report that indicator 103 is associated with a cyber threat.

The manually applied action 231 (e.g., block-proxy) indicates an action (e.g., action 115) corresponding to the indicator 103 that has been initiated (or recommended) based on a user request. For example, the threat report analyzer 150 receives the cyber threat report 101, generates a GUI 123 including the indicator 103, provides the GUI 123 to the display device 122, and receives a user input 125 requesting (or recommending) a first action (e.g., block-proxy) associated with the indicator 103. The threat report analyzer 150 determines at least some of the attributes 105 after receiving the user input 125 requesting the first action. For example, the threat report analyzer 150 determines that the manually applied action 231 includes the first action (e.g., block-proxy) in response to receiving the user input 125. In a particular example, the cyber threat report 101 indicates that the first source (e.g., the second device 124 or a user of the second device 124) has recommended a second action (e.g., action 115) corresponding to the indicator 103. The manually applied action 231 includes the first action, the second action, or a combination thereof. In a particular aspect, the threat report analyzer 150 initiates performance of the manually applied action 231 (e.g., block-proxy). Threat report analyzer 150 determines (or updates) the confidence score 107, the impact score 109, the overall score 111, or a combination thereof after determining the manually applied action 231. In a particular example, the confidence score 107 is higher for indicator 103 if the manually applied action 231 includes at least one action corresponding to the indicator 103, a particular action (e.g., block-proxy) corresponding to the indicator 103, or both.

The indicator creation date 235 (e.g., 8/15/2018) indicates the date on which indicator 103 was detected by the first source (e.g., second device 124). For example, the indicator creation date 235 indicates the date on which the first source (e.g., second device 124) received (or detected) a report (e.g., a user post on a public forum) indicating that indicator 103 is associated with malicious activity. In another example, the indicator creation date 235 corresponds to the date (e.g., a creation date or an update date) on which the cyber threat report 101 was authored. In a particular example, the confidence score 107 is higher if the indicator creation date 235 is more recent. To illustrate, the threat report analyzer 150 determines an indicator age at a first time based on the indicator creation date 235 (e.g., indicator age = first time - indicator creation date 235) and updates the confidence score 107 based on the indicator age. The confidence score 107 is lower for a higher indicator age. In a particular aspect, the threat report analyzer 150 updates (e.g., at particular time intervals) the confidence score 107, the impact score 109, the overall score 111, the position 117, or a combination thereof of the indicators 103 stored in the response queue 113. Thus, the longer the indicator 103 is stored in the response queue 113, the more priority the indicator 103 may lose, because higher priority indicators are processed earlier and more indicators are added to the response queue 113.
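The age-based score update described above can be sketched as a decay applied at each re-scoring interval. The exponential form and the 90-day half-life are assumptions for illustration; the disclosure specifies only that older indicators receive lower confidence.

```python
from datetime import datetime, timedelta

def aged_confidence(base_confidence, creation_date, now, half_life_days=90):
    """Halve the confidence contribution every half_life_days,
    so a higher indicator age yields a lower confidence score."""
    age_days = (now - creation_date).days
    return base_confidence * 0.5 ** (age_days / half_life_days)
```

Re-running this at particular time intervals (and re-sorting the response queue on the updated overall scores) reproduces the aging behavior described above.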

The internal click count 237 (e.g., 500) represents the number of times the indicator 103 is detected in network traffic. For example, the historical data 121 includes a network log, a system log, or both, that tracks network traffic in a particular network portion of the network 102. The particular network portion is considered internal to the organization associated with the first device 140. The last internal click date 239 (e.g., 4/12/2001) indicates the last date on which the indicator 103 was detected in the particular network portion. In a particular aspect, the threat report analyzer 150 determines the action 115 based on at least some of the attributes 105, as further described with reference to FIG. 5. For example, the threat report analyzer 150 determines the potential business impact of various actions (e.g., blocking all network traffic associated with the indicator 103 or blocking some network traffic associated with the indicator 103). To illustrate, if the internal click count 237 (e.g., 500) is high, the last internal click date 239 (e.g., 4/12/2001) is recent, or both, the potential business impact is high.

The threat report analyzer 150 may refrain from selecting an action as the action 115 in response to determining that the potential business impact of the action is greater than a business impact threshold. In a particular aspect, the threat report analyzer 150 selects the action 115 independently of the potential business impact and adds the action 115 to a particular action queue of a plurality of action queues based on the potential business impact. For example, the threat report analyzer 150, in response to determining that the potential business impact is greater than an impact threshold, adds the action 115 to a first action queue (e.g., the action queue 119) of actions that are performed only in response to an explicit user request. Alternatively, the threat report analyzer 150, in response to determining that the potential business impact is less than or equal to the impact threshold, adds the action 115 to a second action queue (e.g., the action queue 119) of actions that are performed unless an explicit user cancellation is received.
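A minimal sketch of this impact-based routing, assuming a numeric potential-impact estimate and two Python lists standing in for the action queues (the names and return labels are placeholders):

```python
def route_action(action: str, potential_impact: float, impact_threshold: float,
                 approval_queue: list, auto_queue: list) -> str:
    # High-impact actions wait for an explicit user request; lower-impact
    # actions are queued to run unless the user explicitly cancels them.
    if potential_impact > impact_threshold:
        approval_queue.append(action)
        return "requires_user_request"
    auto_queue.append(action)
    return "auto_execute_unless_cancelled"
```

The same structure extends to more than two queues if finer impact bands are needed.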

Thus, threat report analyzer 150 determines attributes 105 based on cyber threat report 101, first data 165, second data 167, historical data 121, or a combination thereof. The attributes 105 enable the threat report analyzer 150 to determine a priority (e.g., the total score 111) of the indicators 103, as described with reference to FIG. 1.

FIG. 3 includes an illustration 300 of an example calculation of the confidence score 107. The illustration 300 includes a table 302. The first column of the table 302 indicates examples of the attributes 105. The second column of the table 302 includes attribute values 390 as illustrative values of the examples of the attributes 105 indicated in the first column. The attributes 105 include an attribution confidence 219, a source count 221, an indicator type 233, an indicator creation date 235, and a second source click volume 301. The second source click volume 301 indicates the number of times the indicator 103 is detected, as reported by a second source (e.g., a trusted source). For example, the additional source data 225 of FIG. 2 includes the second source click volume 301. A second source click volume 301 of "none" indicates that the indicator 103 is reported in the first data 165 as being associated with a cyber threat zero times.

The threat report analyzer 150 determines scores 392 for the attributes 105. For example, the threat report analyzer 150 determines a first score (e.g., 10), a second score (e.g., 10), a third score (e.g., 1), a fourth score (e.g., 2), and a fifth score (e.g., 0) in response to determining that the attribution confidence 219 has a first value (e.g., high), the source count 221 has a second value (e.g., 3), the indicator type 233 has a third value (e.g., IPv4 address), the indicator creation date 235 has a fourth value (e.g., 8/15/2014), and the second source click volume 301 has a fifth value (e.g., none), respectively. In a particular aspect, the threat report analyzer 150 determines the scores 392 based on the lookup data 129 of FIG. 1 (e.g., user input, configuration settings, default values, or a combination thereof). For example, the lookup data 129 indicates a first score for a particular attribute (e.g., the keyword tag 215) having a particular value (e.g., "exfil"). The threat report analyzer 150 determines the confidence score 107 based on the first score for the particular attribute. For example, the threat report analyzer 150 determines that the attribution confidence 219 has a first score (e.g., 10) in response to determining that the attribution confidence 219 has a first value (e.g., high) and the lookup data 129 indicates that a first score (e.g., 10) is assigned to an attribution confidence 219 having the first value (e.g., high). The third column of the table 302 includes illustrative values of the scores 392 for the examples of the attributes 105 indicated in the first column.

The threat report analyzer 150 determines weights 394 for the attributes 105. For example, the threat report analyzer 150 assigns a first weight (e.g., 20%), a second weight (e.g., 20%), a third weight (e.g., 10%), a fourth weight (e.g., 40%), and a fifth weight (e.g., 10%) to the attribution confidence 219, the source count 221, the indicator type 233, the indicator creation date 235, and the second source click volume 301, respectively. In a particular aspect, the threat report analyzer 150 determines the weights 394 based on the lookup data 129 indicating that the attribution confidence 219, the source count 221, the indicator type 233, the indicator creation date 235, and the second source click volume 301 are to be assigned the first weight (e.g., 20%), the second weight (e.g., 20%), the third weight (e.g., 10%), the fourth weight (e.g., 40%), and the fifth weight (e.g., 10%), respectively. The fourth column of the table 302 includes illustrative values of the weights 394 for the examples of the attributes 105 indicated in the first column. Thus, the calculation of the confidence score 107 may be tailored for a particular attribute by specifying (e.g., in the lookup data 129) a particular weight for the attribute and by specifying (e.g., in the lookup data 129) a particular score for a particular value of the attribute.

Threat report analyzer 150 determines a weighted score 396 for attribute 105 based on score 392 and weight 394. For example, the threat report analyzer 150 assigns a first weighted score (e.g., a first score x first weight), a second weighted score (e.g., a second score x second weight), a third weighted score (e.g., a third score x third weight), a fourth weighted score (e.g., a fourth score x fourth weight), and a fifth weighted score (e.g., a fifth score x fifth weight) to the attribution confidence 219, the source count 221, the indicator type 233, the indicator creation date 235, and the second source click volume 301, respectively. The fifth column of table 302 includes illustrative values for the example weighted scores 396 for the attributes 105 indicated in the first column.

The threat report analyzer 150 determines the confidence score 107 (e.g., 4.9/10) for the attributes 105 based on the weighted scores 396. For example, the threat report analyzer 150 determines the confidence score 107 (e.g., 4.9/10) as a sum of the weighted scores 396 assigned to the attribution confidence 219, the source count 221, the indicator type 233, the indicator creation date 235, and the second source click volume 301 (e.g., confidence score 107 = first weighted score + second weighted score + third weighted score + fourth weighted score + fifth weighted score).

The illustration 300 includes a table 304. The table 304 indicates an illustrative value (e.g., 4.9/10) of the confidence score 107 corresponding to the examples of the attributes 105 indicated in the table 302. The threat report analyzer 150 assigns a rating 398 to the confidence score 107. Ranges 380 of the confidence score 107 correspond to various ratings. In a particular aspect, the lookup data 129 indicates the ranges 380. The illustration 300 includes a table 306 indicating illustrative values of the ranges 380. The table 306 indicates that a first rating (e.g., unknown), a second rating (e.g., low), a third rating (e.g., medium), and a fourth rating (e.g., high) correspond to a first range (e.g., 0-2.9), a second range (e.g., 3.0-5.9), a third range (e.g., 6.0-7.9), and a fourth range (e.g., 8.0-10), respectively. The threat report analyzer 150 determines that the confidence score 107 corresponds to the second rating (e.g., low) in response to determining that the second range (e.g., 3.0-5.9) includes the confidence score 107 (e.g., 4.9). The threat report analyzer 150 thus determines the confidence score 107 based on at least some of the attributes 105.
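A sketch of the weighted-sum calculation and rating lookup, using the illustrative scores, weights, and ranges of tables 302 and 306 (the attribute keys are descriptive placeholders, not claim terms):

```python
def weighted_score(scores: dict, weights: dict) -> float:
    # Sum of score x weight over all attributes; weights sum to 1.0.
    return sum(scores[name] * weights[name] for name in scores)

def rating(score: float) -> str:
    # Map a 0-10 score to a rating using the ranges of table 306.
    if score < 3.0:
        return "unknown"
    if score < 6.0:
        return "low"
    if score < 8.0:
        return "medium"
    return "high"

scores = {"attribution_confidence": 10, "source_count": 10,
          "indicator_type": 1, "indicator_creation_date": 2,
          "second_source_click_volume": 0}
weights = {"attribution_confidence": 0.20, "source_count": 0.20,
           "indicator_type": 0.10, "indicator_creation_date": 0.40,
           "second_source_click_volume": 0.10}

confidence = weighted_score(scores, weights)  # 4.9, a "low" rating
```

In practice the scores, weights, and ranges would come from the lookup data 129 rather than being hard-coded.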

FIG. 4 includes an illustration 400 of an example calculation of the impact score 109. The illustration 400 includes a table 402. The first column of the table 402 indicates examples of the attributes 105. The second column of the table 402 includes attribute values 490 as illustrative values of the examples of the attributes 105 indicated in the first column. The attributes 105 include an attack type 209, a threat type 211, and an attribution identifier 217.

The threat report analyzer 150 determines scores 492 for the attributes 105. For example, the threat report analyzer 150 determines a first score (e.g., 7), a second score (e.g., 7), and a third score (e.g., 8) in response to determining that the attack type 209 has a first value (e.g., malware), the threat type 211 has a second value (e.g., malicious IP), and the attribution identifier 217 has a third value (e.g., "Flu Bunny"), respectively. In a particular aspect, the threat report analyzer 150 determines the scores 492 based on the lookup data 129 of FIG. 1. For example, the lookup data 129 indicates a second score for a particular attribute (e.g., the keyword tag 215) having a particular value (e.g., "exfil"). The threat report analyzer 150 determines the impact score 109 based on the second score for the particular attribute. The second score (indicated by the lookup data 129) used to determine the impact score 109 may be the same as or different from the first score used to determine the confidence score 107. In a particular example, the threat report analyzer 150 determines that the attack type 209 has a first score (e.g., 7) in response to determining that the attack type 209 has a first value (e.g., malware) and the lookup data 129 indicates that a first score (e.g., 7) is assigned to an attack type 209 having the first value (e.g., malware). The third column of the table 402 includes illustrative values of the scores 492 for the examples of the attributes 105 indicated in the first column.

The threat report analyzer 150 determines weights 494 for the attributes 105. For example, the threat report analyzer 150 assigns a first weight (e.g., 30%), a second weight (e.g., 30%), and a third weight (e.g., 40%) to the attack type 209, the threat type 211, and the attribution identifier 217, respectively. In a particular aspect, the threat report analyzer 150 determines the weights 494 based on the lookup data 129 indicating that the attack type 209, the threat type 211, and the attribution identifier 217 are to be assigned the first weight (e.g., 30%), the second weight (e.g., 30%), and the third weight (e.g., 40%), respectively. The fourth column of the table 402 includes illustrative values of the weights 494 for the examples of the attributes 105 indicated in the first column. Thus, the calculation of the impact score 109 may be customized for a particular attribute by specifying (e.g., in the lookup data 129) a particular weight for the attribute and by specifying (e.g., in the lookup data 129) a particular score for a particular value of the attribute.

The threat report analyzer 150 determines weighted scores 496 for the attributes 105 based on the scores 492 and the weights 494. For example, the threat report analyzer 150 assigns a first weighted score (e.g., first score x first weight), a second weighted score (e.g., second score x second weight), and a third weighted score (e.g., third score x third weight) to the attack type 209, the threat type 211, and the attribution identifier 217, respectively. The fifth column of the table 402 includes illustrative values of the weighted scores 496 for the examples of the attributes 105 indicated in the first column.

The threat report analyzer 150 determines the impact score 109 (e.g., 7.4/10) for the attributes 105 based on the weighted scores 496. For example, the threat report analyzer 150 determines the impact score 109 (e.g., 7.4/10) as a sum of the weighted scores 496 assigned to the attack type 209, the threat type 211, and the attribution identifier 217 (e.g., impact score 109 = first weighted score + second weighted score + third weighted score).

The illustration 400 includes a table 404. The table 404 indicates an illustrative value (e.g., 7.4/10) of the impact score 109 corresponding to the examples of the attributes 105 indicated in the table 402. The threat report analyzer 150 assigns a rating 498 to the impact score 109. Ranges 480 of the impact score 109 correspond to various ratings. In a particular aspect, the lookup data 129 indicates the ranges 480. The illustration 400 includes a table 406 indicating illustrative values of the ranges 480. The table 406 indicates that a first rating (e.g., unknown), a second rating (e.g., low), a third rating (e.g., medium), and a fourth rating (e.g., high) correspond to a first range (e.g., 0-2.9), a second range (e.g., 3.0-5.9), a third range (e.g., 6.0-7.9), and a fourth range (e.g., 8.0-10), respectively. For example, the threat report analyzer 150 determines that the impact score 109 corresponds to the third rating (e.g., medium) in response to determining that the third range (e.g., 6.0-7.9) includes the impact score 109 (e.g., 7.4). The threat report analyzer 150 thus determines the impact score 109 based on at least some of the attributes 105.
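The same weighted-sum pattern applies to the impact score; a sketch using the illustrative values of tables 402 and 406 (attribute keys are descriptive placeholders):

```python
def impact_score(scores: dict, weights: dict) -> float:
    # Sum of score x weight over all attributes; weights sum to 1.0.
    return sum(scores[name] * weights[name] for name in scores)

scores = {"attack_type": 7, "threat_type": 7, "attribution_identifier": 8}
weights = {"attack_type": 0.30, "threat_type": 0.30,
           "attribution_identifier": 0.40}

impact = impact_score(scores, weights)  # 7.4, within the "medium" range (6.0-7.9)
```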

FIG. 5 includes an illustration 500 of examples 510, 520, and 530 of the attributes 105 and the actions 115 corresponding to particular attribute values 540, 550, and 560. In the first example 510, the attribute values 540 indicate that the attributes 105 include an indicator type 233 having a first value (e.g., an IPv4 address), an internal click count 237 having a second value (e.g., 0), a confidence score 107 having a third value (e.g., 7.3), and an impact score 109 having a fourth value (e.g., 7.1).

In the first example 510, the threat report analyzer 150 determines that the action 115 associated with the indicator 103 has a low potential business impact because the second value (e.g., 0) of the internal click count 237 is below an internal click count threshold (e.g., 10). The threat report analyzer 150 determines that the third value (e.g., 7.3) of the confidence score 107 corresponds to a rating 398 (e.g., a medium rating) based on the ranges 380 of FIG. 3, and determines that the fourth value (e.g., 7.1) of the impact score 109 corresponds to a rating 498 (e.g., a medium rating) based on the ranges 480 of FIG. 4.

In a particular aspect, the threat report analyzer 150 selects a more aggressive action as the action 115 in response to determining that a first criterion is satisfied. In a particular embodiment, the threat report analyzer 150 determines that the first criterion is satisfied in response to determining that the action 115 has a low potential business impact, that the confidence score 107 (e.g., 7.3) satisfies a confidence threshold (e.g., is greater than or equal to 6.0), that the impact score 109 (e.g., 7.1) satisfies an impact threshold (e.g., is greater than or equal to 6.0), or a combination thereof. In a particular embodiment, the lookup data 129 of FIG. 1 indicates the first criterion for selecting a more aggressive action as the action 115. In response to determining that the first criterion is satisfied, the threat report analyzer 150 sets the action 115 to include blocking proxy traffic and email traffic associated with the indicator 103 and monitoring proxy traffic, email traffic, Reverse Proxy (RP) traffic, Virtual Private Network (VPN) traffic, and external network logs associated with the indicator 103.

In the second example 520, the attribute values 550 indicate that the attributes 105 include an indicator type 233 having a first value (e.g., a domain name), an internal click count 237 having a second value (e.g., 100), a last internal click date 239 having a third value (e.g., 04/2016), a confidence score 107 having a fourth value (e.g., 5.8), and an impact score 109 having a fifth value (e.g., 7.5). In the second example 520, the threat report analyzer 150 determines that the second value (e.g., 100) of the internal click count 237 indicates a high business impact because the second value is greater than an internal click count threshold (e.g., 10), and that the third value (e.g., 04/2016) indicates a low business impact because the third value is earlier than a click threshold date (e.g., more than one year ago). The threat report analyzer 150 determines that the action 115 associated with the indicator 103 has a medium potential business impact based on the second value and the third value. The threat report analyzer 150 determines that the fourth value (e.g., 5.8) of the confidence score 107 corresponds to a rating 398 (e.g., a low rating) based on the ranges 380 of FIG. 3, and determines that the fifth value (e.g., 7.5) of the impact score 109 corresponds to a rating 498 (e.g., a medium rating) based on the ranges 480 of FIG. 4.

In a particular aspect, the threat report analyzer 150 selects an aggressive action as the action 115 in response to determining that a second criterion is satisfied. In a particular embodiment, the threat report analyzer 150 determines that the second criterion is satisfied in response to determining that the action 115 has a medium potential business impact, that the confidence score 107 (e.g., 5.8) does not satisfy a confidence threshold (e.g., is less than 6.0), that the impact score 109 (e.g., 7.5) satisfies an impact threshold (e.g., is greater than or equal to 6.0), or a combination thereof. In another embodiment, the threat report analyzer 150 determines that the second criterion is satisfied in response to determining that the action 115 has a medium potential business impact and that the impact score 109 (e.g., 7.5) satisfies the impact threshold (e.g., is greater than or equal to 6.0). In a particular embodiment, the lookup data 129 of FIG. 1 indicates the second criterion for selecting an aggressive action as the action 115. In response to determining that the second criterion is satisfied, the threat report analyzer 150 sets the action 115 to include blocking proxy traffic associated with the indicator 103 and monitoring proxy traffic, email traffic, Reverse Proxy (RP) traffic, Virtual Private Network (VPN) traffic, and external network logs associated with the indicator 103.

In the third example 530, the attribute values 560 indicate that the attributes 105 include an indicator type 233 having a first value (e.g., an IPv4 address), an internal click count 237 having a second value (e.g., 10,000), a last internal click date 239 having a third value (e.g., 2 days ago), a confidence score 107 having a fourth value (e.g., 2.8), and an impact score 109 having a fifth value (e.g., 1.7). In the third example 530, the threat report analyzer 150 determines that the second value (e.g., 10,000) of the internal click count 237 indicates a high business impact because the second value is greater than an internal click count threshold (e.g., 10), and that the third value (e.g., 2 days ago) indicates a high business impact because the third value is more recent than a click threshold date (e.g., one week ago). The threat report analyzer 150 determines that the action 115 associated with the indicator 103 has a high potential business impact based on the second value and the third value.

In a particular aspect, the threat report analyzer 150 selects a non-aggressive action as the action 115 in response to determining that a third criterion is satisfied. In a particular embodiment, the threat report analyzer 150 determines that the third criterion is satisfied in response to determining that the action 115 has a high potential business impact, that the confidence score 107 (e.g., 2.8) does not satisfy a confidence threshold (e.g., is less than 6.0), that the impact score 109 (e.g., 1.7) does not satisfy an impact threshold (e.g., is less than 6.0), or a combination thereof. In a particular embodiment, the lookup data 129 of FIG. 1 indicates the third criterion for selecting a non-aggressive action as the action 115. The threat report analyzer 150 sets the action 115 to indicate that no action is taken in response to determining that the third criterion is satisfied. In another example, the threat report analyzer 150, in response to determining that the third criterion is satisfied, sets the action 115 to include monitoring proxy traffic, email traffic, Reverse Proxy (RP) traffic, Virtual Private Network (VPN) traffic, and external network logs associated with the indicator 103. In a particular embodiment, the threat report analyzer 150, in response to determining that the third criterion is satisfied, selects an aggressive or moderately aggressive action as the action 115 and adds the action 115 to a first action queue (e.g., the action queue 119) of actions to be performed in response to user approval.

It should be understood that the examples included in FIG. 5 are illustrative and not limiting. The threat report analyzer 150 may select various actions to perform based on various attributes associated with the indicator 103. Although various thresholds have been described with reference to FIGS. 1-5, the system 100 may include multiple thresholds, each corresponding to a particular one of the attributes 105.
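Under the illustrative thresholds above, the three criteria can be condensed into a single decision rule; the level names, the "low"/"medium"/"high" impact categories, and the 6.0 thresholds are taken from the examples and are not limiting:

```python
def select_action_level(potential_business_impact: str,
                        confidence: float, impact: float,
                        threshold: float = 6.0) -> str:
    # potential_business_impact is "low", "medium", or "high", derived
    # from the internal click count and the last internal click date.
    if (potential_business_impact == "low"
            and confidence >= threshold and impact >= threshold):
        return "more_aggressive"   # first criterion (example 510)
    if potential_business_impact == "medium" and impact >= threshold:
        return "aggressive"        # second criterion (example 520)
    return "non_aggressive"        # third criterion (example 530)
```

The non-aggressive branch could equally return "no action" or queue an aggressive action for user approval, as the third example describes.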

FIG. 6 is a flow diagram of a method 600 of cyber threat indicator extraction and response. The method 600 may be performed by one or more of the threat report analyzer 150, the first device 140, or the system 100 of FIG. 1.

The method 600 includes receiving a cyber-threat report, at 602. For example, threat report analyzer 150 of FIG. 1 receives cyber threat report 101, as described with reference to FIG. 1.

The method 600 also includes extracting an indicator from the cyber-threat report, at 604. For example, threat report analyzer 150 of fig. 1 extracts indicator 103 from cyber threat report 101. Indicator 103 is reported as being associated with a cyber threat.

The method 600 may include extracting attributes associated with the indicator from the cyber threat report, at 606. For example, the threat report analyzer 150 of FIG. 1 extracts the indicator type 233, the threat type 211, the attack type 209, the registration date 242, the first seen date 201, the last seen date 203, the first reported date 227, the last reported date 229, the source reputation score 223 of the first source (e.g., the second device 124), the description keyword 213, the kill chain stage 207, the attribution identifier 217, the attribution confidence 219, or a combination thereof, from the cyber threat report 101, as described with reference to FIGS. 1-2.

Alternatively or additionally, method 600 may include determining an attribute associated with the indicator at 608. For example, the threat report analyzer 150 of fig. 1 determines the report volume 205, the false positive rate 244, or both, as described with reference to fig. 1-2.

The method 600 further includes determining, based on the indicator, a confidence score that indicates a likelihood that the indicator is associated with malicious activity, at 610. For example, the threat report analyzer 150 of FIG. 1 determines the confidence score 107 based on the indicator 103, as further described with reference to FIGS. 1 and 3. To illustrate, the threat report analyzer 150 determines the confidence score 107 based on at least one of the indicator type 233, the threat type 211, the attack type 209, the registration date 242, the first seen date 201, the last seen date 203, the first reported date 227, the last reported date 229, the source reputation score 223 of the first source (e.g., the second device 124), the description keyword 213, the kill chain stage 207, the attribution identifier 217, the attribution confidence 219, the report volume 205, or the false positive rate 244, as described with reference to FIGS. 1-3. The confidence score 107 indicates a likelihood that the indicator 103 is associated with malicious activity.

The method 600 further includes determining, based on the indicator, an impact score indicative of a potential severity of the malicious activity, at 612. For example, the threat report analyzer 150 of FIG. 1 determines the impact score 109 based on the indicator 103, as further described with reference to FIGS. 1 and 2. To illustrate, the threat report analyzer 150 determines the impact score 109 based on at least one of the indicator type 233, the threat type 211, the attack type 209, the registration date 242, the first seen date 201, the last seen date 203, the first reported date 227, the last reported date 229, the source reputation score 223 of the first source (e.g., the second device 124), the description keyword 213, the kill chain stage 207, the attribution identifier 217, the attribution confidence 219, the report volume 205, or the false positive rate 244, as described with reference to FIGS. 1-2 and 4. The impact score 109 indicates the potential severity of the malicious activity.

The method 600 also includes identifying an action to perform based on the indicator, the confidence score, and the impact score, at 614. For example, the threat report analyzer 150 of FIG. 1 identifies the action 115 based on the indicator 103, the confidence score 107, and the impact score 109, as described with reference to FIG. 1. The action 115 includes blocking network traffic corresponding to the indicator 103, monitoring network traffic corresponding to the indicator 103, or both, as described with reference to FIGS. 1 and 5.

The method 600 also includes initiating performance of the action, at 616. For example, the threat report analyzer 150 of FIG. 1 initiates performance of the action 115, as described with reference to FIG. 1. To illustrate, the threat report analyzer 150 may perform the action 115 independently of user input (e.g., without user input) indicating that the action 115 is to be performed.

Thus, the method 600 enables identifying the action 115 corresponding to the indicator 103 based on the likelihood that the indicator 103 is associated with malicious activity and the potential severity of the malicious activity. The action 115 may be performed without receiving any user input indicating that the action 115 is to be performed. Earlier performance of the action 115 enables prevention of the corresponding malicious activity.
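The overall flow of the method 600 can be sketched as a pipeline in which each callable is a placeholder for the corresponding operation (602-616); the wiring, not the internals, is what this sketch shows:

```python
def process_threat_report(report, extract, score_confidence,
                          score_impact, identify_action, execute):
    # Sketch of method 600: extract an indicator (604), score it
    # (610, 612), identify an action (614), and initiate the action
    # (616) without requiring user input.
    indicator = extract(report)
    confidence = score_confidence(indicator)
    impact = score_impact(indicator)
    action = identify_action(indicator, confidence, impact)
    execute(action)
    return action
```

In a full implementation each stage would be backed by the attribute extraction, lookup-data scoring, and queueing logic described with reference to FIGS. 1-5.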

FIG. 7 is an illustration of a block diagram of a computing environment 700 that includes a computing device 710 configured to support aspects of computer-implemented methods and computer-executable program instructions (or code) according to the present disclosure. For example, the computing device 710, or portions thereof, is configured to execute instructions to initiate, perform, or control one or more operations described with reference to FIGS. 1-6.

The computing device 710 includes a transceiver 722. The transceiver 722 includes a transmitter antenna 704 and a receiver antenna 708. The computing device 710 includes a processor 720. In a particular aspect, the processor 720 includes the threat report analyzer 150. The processor 720 is configured to communicate with the system memory 730, one or more storage devices 740, one or more input/output interfaces 750, one or more communication interfaces 760, or a combination thereof. The system memory 730 includes volatile memory devices (e.g., Random Access Memory (RAM) devices), non-volatile memory devices (e.g., Read Only Memory (ROM) devices, Programmable Read Only Memory (PROM) devices, and flash memory), or both. The system memory 730 stores an operating system 732, which may include a basic input/output system for starting the computing device 710 and a full operating system for enabling the computing device 710 to interact with users, other programs, and other devices. The system memory 730 also stores program data 736. In a particular aspect, the memory 142 of FIG. 1 includes the system memory 730, the one or more storage devices 740, or a combination thereof.

The system memory 730 includes one or more application programs 734 that are executable by the processor 720. By way of example, one or more application programs 734 include instructions that are executable by processor 720 to initiate, control, or perform one or more operations described with reference to fig. 1-6. To illustrate, the one or more applications 734 include instructions executable by the processor 720 to initiate, control, or perform one or more operations described with reference to the threat report analyzer 150.

The processor 720 is configured to communicate with one or more storage devices 740. For example, the one or more storage devices 740 include non-volatile storage devices such as magnetic disks, optical disks, or flash memory devices. In a particular example, the storage device 740 includes both removable and non-removable memory devices. Storage device 740 is configured to store an operating system, images of the operating system, application programs, and program data. In a particular aspect, the system memory 730, the storage device 740, or both, include tangible computer-readable media. In a particular aspect, the one or more storage devices 740 are external to the computing device 710.

The processor 720 is configured to communicate with one or more input/output interfaces 750 that enable the computing device 710 to communicate with one or more input/output devices 770 to facilitate user interaction. In a particular aspect, the input/output interfaces 750 include the input interface 144 of FIG. 1, the output interface 148 of FIG. 1, or both. The processor 720 is configured to detect interaction events based on user input received via the input/output interfaces 750. Additionally, the processor 720 is configured to send display output to the display device 122 of FIG. 1 via the input/output interfaces 750. The processor 720 is configured to communicate with devices or controllers 780 via the one or more communication interfaces 760. For example, the one or more communication interfaces 760 include the communication interface 146 of FIG. 1. In an illustrative example, a non-transitory computer-readable storage medium (e.g., the system memory 730) includes instructions that, when executed by a processor (e.g., the processor 720), cause the processor to initiate, perform, or control operations. The operations include one or more of the operations described with reference to FIGS. 1-6.

Further, the present disclosure includes embodiments according to the following clauses:

Clause 1. A device comprising: a communication interface configured to receive a cyber threat report; and a processor configured to: extract an indicator from the cyber threat report, the indicator being reported as being associated with a cyber threat; determine, based on the indicator, a confidence score that indicates a likelihood that the indicator is associated with malicious activity; determine, based on the indicator, an impact score indicative of a potential severity of the malicious activity; identify an action to perform based on the indicator, the confidence score, and the impact score, wherein the action includes blocking network traffic corresponding to the indicator or monitoring network traffic corresponding to the indicator; and initiate performance of the action.

Clause 2. The device of clause 1, wherein the indicator comprises an Internet Protocol (IP) address, an email subject, a domain name, a Uniform Resource Identifier (URI), a Uniform Resource Locator (URL), a file name, a Message Digest Algorithm 5 (MD5) hash, a file path, or a combination thereof.

Clause 3. The device of clause 1 or 2, wherein the confidence score is based on one or more attributes associated with the indicator, the one or more attributes including a first seen date, a last seen date, an indicator age, a registration date, a first reported date, a last reported date, a report source, a source reputation score, a report volume, an attribution confidence, a particular keyword, or a false positive rate.

Clause 4. The apparatus of any of clauses 1-3, wherein the impact score is based on one or more attributes associated with the indicator, the one or more attributes including an indicator type, a report volume, a kill chain stage, a threat type, an attack type, a particular keyword, or an attribution identifier.
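Clauses 3 and 4 list the attributes feeding the two scores without fixing a formula. One simple possibility is a clamped weighted sum; the attribute names, weights, and [0, 1] normalization below are all illustrative assumptions rather than anything specified by the disclosure.

```python
def weighted_score(attributes, weights):
    """Combine normalized attribute values (each in [0, 1]) into a score in
    [0, 1] using per-attribute weights; attributes without a weight are
    ignored."""
    raw = sum(weights[name] * value
              for name, value in attributes.items() if name in weights)
    return max(0.0, min(1.0, raw))

# Hypothetical weightings: a strong source reputation and a high report
# volume raise confidence, while a high false positive rate lowers it.
CONFIDENCE_WEIGHTS = {
    "source_reputation": 0.5,
    "report_volume": 0.4,
    "false_positive_rate": -0.4,
}
IMPACT_WEIGHTS = {
    "kill_chain_stage": 0.6,   # later stages imply higher potential severity
    "threat_type_severity": 0.4,
}
```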

Clause 5. The device of any of clauses 1-4, wherein the processor is further configured to add the indicator to a location in a response queue, the location based on the confidence score and the impact score.
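The response queue of clause 5, with position based on the two scores, could be realized as a max-priority queue. Combining the scores by multiplication is an assumption; the disclosure says only that the location is based on both scores.

```python
import heapq

class ResponseQueue:
    """Priority queue in which indicators with a higher combined
    confidence x impact product are positioned for earlier handling."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker: preserves insertion order on equal priority

    def add(self, indicator, confidence_score, impact_score):
        priority = confidence_score * impact_score
        # heapq is a min-heap, so negate the priority for max-first ordering.
        heapq.heappush(self._heap, (-priority, self._counter, indicator))
        self._counter += 1

    def pop(self):
        """Remove and return the highest-priority indicator."""
        return heapq.heappop(self._heap)[2]
```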

Clause 6. The device of clause 5, further comprising an output interface configured to be coupled to a display device, wherein the processor is further configured to, prior to initiating performance of the action: generate a Graphical User Interface (GUI) based on the response queue, the GUI indicating the location of the indicator in the response queue; and provide the GUI to the display device via the output interface.

Clause 7. A method, comprising: receiving, at a device, a cyber-threat report; extracting, at the device, an indicator from the cyber-threat report, the indicator being reported as being associated with a cyber-threat; determining, based on the indicator, a confidence score indicating a likelihood that the indicator is associated with malicious activity; determining, based on the indicator, an impact score indicative of a potential severity of the malicious activity; identifying an action to perform based on the indicator, the confidence score, and the impact score, wherein the action includes blocking network traffic corresponding to the indicator or monitoring network traffic corresponding to the indicator; and initiating execution of the action at the device.

Clause 8. The method of clause 7, wherein the indicator comprises an Internet Protocol (IP) address, a virus signature, an email address, an email subject, a domain name, a Uniform Resource Identifier (URI), a Uniform Resource Locator (URL), a file name, a Message Digest Algorithm 5 (MD5) hash, a file path, or a combination thereof.

Clause 9. The method of any of clauses 7 or 8, wherein the confidence score and the impact score are based on one or more attributes associated with the indicator.

Clause 10. The method of any of clauses 7-9, further comprising determining an attribute associated with the indicator based on the cyber-threat report, wherein the attribute comprises an indicator type, a threat type, an attack type, a first-seen date, a last-seen date, a first-reported date, a last-reported date, a report source, a particular keyword, a kill chain stage, an attribution identifier, or an attribution confidence, and wherein at least one of the confidence score or the impact score is based on the attribute.

Clause 11. The method of any of clauses 7-10, further comprising determining an attribute associated with the indicator, the attribute comprising a report volume or a false positive rate, wherein the report volume comprises a count indicating how many times the indicator has been reported in association with malicious activity, wherein the false positive rate is based on a first number of times the indicator is detected as being associated with non-malicious activity and a second number of times the indicator is detected as being associated with malicious activity, and wherein at least one of the confidence score or the impact score is based on the attribute.
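The false positive rate in the clause above is defined in terms of two detection counts. A direct reading is the ratio sketched below; the zero-detection fallback is an assumption, since the disclosure does not address that case.

```python
def false_positive_rate(non_malicious_count, malicious_count):
    """Rate based on a first number of times the indicator was detected in
    association with non-malicious activity and a second number of times it
    was detected in association with malicious activity."""
    total = non_malicious_count + malicious_count
    if total == 0:
        return 0.0  # assumed fallback when the indicator was never detected
    return non_malicious_count / total
```

For example, an indicator flagged in three benign detections and seven malicious ones would have a false positive rate of 0.3 under this reading.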

Clause 12. The method of any of clauses 7-11, wherein the confidence score is based on one or more attributes associated with the indicator, the one or more attributes including a first-seen date, a last-seen date, an indicator age, a first-reported date, a last-reported date, a report source, a source reputation score, a report volume, an attribution confidence, a particular keyword, or a false positive rate.

Clause 13. The method of any of clauses 7-12, wherein the impact score is based on one or more attributes associated with the indicator, the one or more attributes including an indicator type, a report volume, a kill chain stage, a threat type, an attack type, a particular keyword, or an attribution identifier.

Clause 14. The method of any of clauses 7-13, further comprising adding, at the device, the indicator to a location in a response queue, the location based on the confidence score and the impact score.

Clause 15. The method of clause 14, further comprising, prior to initiating execution of the action: generating, at the device, a Graphical User Interface (GUI) based on the response queue, the GUI indicating the location of the indicator in the response queue; and providing the GUI from the device to a display device.

Clause 16. The method of clause 7, further comprising: generating, at the device, a Graphical User Interface (GUI) indicating the action; and providing the GUI from the device to a display device, wherein execution of the action is initiated in response to receiving a user input indicating that the action is to be performed.

Clause 17. The method of any of clauses 7-16, wherein initiating performance of the action comprises scheduling performance of the action, and wherein the method further comprises: generating, at the device, a Graphical User Interface (GUI) indicating the action; providing the GUI from the device to a display device; and canceling performance of the action in response to receiving a user input indicating that the action is not to be performed.

Clause 18. The method of any of clauses 7-17, wherein initiating performance of the action comprises performing the action independently of receiving user input.

Clause 19. A computer-readable storage device storing instructions that, when executed by a processor, cause the processor to perform operations comprising: receiving a cyber-threat report; extracting an indicator from the cyber-threat report, the indicator being reported as being associated with a cyber-threat; determining, based on the indicator, a confidence score indicating a likelihood that the indicator is associated with malicious activity; determining, based on the indicator, an impact score indicative of a potential severity of the malicious activity; identifying an action to perform based on the indicator, the confidence score, and the impact score, wherein the action includes blocking network traffic corresponding to the indicator or monitoring network traffic corresponding to the indicator; and initiating execution of the action.

Clause 20. The computer-readable storage device of clause 19, wherein the operations further comprise extracting an attribute associated with the indicator from the cyber-threat report, wherein at least one of the confidence score or the impact score is based on the attribute, and wherein the attribute comprises an indicator type, a threat type, an attack type, a registration date, a first-seen date, a last-seen date, a first-reported date, a last-reported date, a report source, a particular keyword, a kill chain stage, an attribution identifier, or an attribution confidence.

The above examples are illustrative and do not limit the disclosure. It should be understood that numerous modifications and variations are possible in accordance with the principles of the present disclosure.

The illustrations of the examples described herein are intended to provide a general understanding of the structure of various embodiments. The illustrations are not intended to serve as a complete description of all of the elements and features of apparatus and systems that utilize the structures or methods described herein. Many other embodiments will be apparent to those of skill in the art upon reading this disclosure. Other embodiments may be utilized or derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. For example, method operations may be performed in a different order than illustrated, or one or more method operations may be omitted. The present disclosure and figures are, therefore, to be regarded as illustrative rather than restrictive.

Moreover, although specific examples have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar results may be substituted for the specific implementations shown. This disclosure is intended to cover any and all subsequent adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the description.

The Abstract of the disclosure is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing detailed description, various features may be grouped together or described in a single embodiment for the purpose of streamlining the disclosure. As the following claims reflect, the claimed subject matter may be directed to less than all of the features of any of the disclosed examples. Accordingly, the scope of the disclosure is defined by the following claims and their equivalents.
