Enforcing segmentation can be labor-intensive: Here’s how GenAI can help
Generative AI has the potential to increase the effectiveness of security teams by reducing the time they spend on repetitive administrative tasks, allowing them to focus on higher-priority areas. Long-term success requires tangible use cases that improve an organisation's security posture.
One area where generative AI can help is in assisting security teams with their Zero Trust initiatives. Zero Trust is a security strategy based on the principle of "never trust, always verify". Rather than implicitly trusting users and devices inside the network perimeter, Zero Trust requires that they be validated before being granted access to resources.
Segmentation and microsegmentation are key building blocks for implementing Zero Trust. Segmentation refers to the practice of dividing IT environments into isolated parts, each with its own access controls. Microsegmentation is even more granular, placing individual devices or applications into their own segments. This limits lateral movement by adversaries if a network is compromised, as only authorised users and workloads can access each segment.
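To make the idea concrete, a segmentation policy can be thought of as a deny-by-default rule set between segments. The sketch below is purely illustrative; the segment names and rule structure are hypothetical and do not represent any vendor's actual policy model.

```python
# Hypothetical deny-by-default segmentation policy: traffic between two
# segments is permitted only if a rule explicitly allows it.
ALLOW_RULES = {
    # (source segment, destination segment): allowed?
    ("web-tier", "app-tier"): True,
    ("app-tier", "db-tier"): True,
    ("web-tier", "db-tier"): False,  # web servers may not reach the database directly
}

def is_allowed(src_segment: str, dst_segment: str) -> bool:
    """Return True only if an explicit allow rule exists; deny by default."""
    return ALLOW_RULES.get((src_segment, dst_segment), False)
```

Because anything not explicitly allowed is denied, an adversary who compromises a web server cannot pivot directly to the database segment, which is the lateral-movement containment the article describes.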
However, implementing segmentation across an organisation comes with challenges. In Akamai's 2023 State of Segmentation report, respondents agreed that segmentation is effective in keeping assets protected, allowing them to contain and mitigate ransomware attacks more quickly. But a lack of skills or expertise was identified as a hurdle to segmenting networks, indicating a talent shortage in this area.
AI-powered asset labeling is a good example of using AI in security, making software-based segmentation and microsegmentation simpler to implement. Security policies are applied based on the label given to a specific component, machine, or group of machines you're trying to protect. However, organisations may not have the complete metadata necessary to label a component correctly, making labeling a labour-intensive process, adding to security teams' workload, and slowing down the implementation of segmentation. Errors in labeling can also lead to the incorrect enforcement of security policies.
This is an area where generative AI can assist. The Akamai Guardicore Platform includes an AI Labeling feature that uses generative AI capabilities to suggest labels for different components, based on process communications identified by a large language model (LLM).
These suggestions come with a confidence score and explanation for why the label was chosen, and IT teams can then use this information to assign the correct label to an asset.
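A workflow like this can be sketched as a simple data structure: each suggestion carries a label, a confidence score, and an explanation, and low-confidence suggestions are routed to a human for review. The field names, threshold, and sample data below are assumptions for illustration only, not the Akamai Guardicore Platform's actual API.

```python
from dataclasses import dataclass

@dataclass
class LabelSuggestion:
    asset: str         # the component or machine being labeled
    label: str         # the suggested label
    confidence: float  # model confidence, 0.0 to 1.0
    explanation: str   # why the label was chosen

def needs_review(s: LabelSuggestion, threshold: float = 0.8) -> bool:
    """Send low-confidence suggestions to a human instead of auto-applying them."""
    return s.confidence < threshold

# Illustrative sample data, not real output.
suggestions = [
    LabelSuggestion("vm-042", "postgres-db", 0.93,
                    "Process communications match a PostgreSQL server"),
    LabelSuggestion("vm-107", "web-frontend", 0.61,
                    "Partial match to web-server traffic patterns"),
]
to_review = [s.asset for s in suggestions if needs_review(s)]
```

Thresholding on the confidence score keeps a human in the loop for ambiguous assets while letting high-confidence labels flow straight into policy, which is how the manual workload gets reduced without sacrificing accuracy.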
This removes much of the manual work from labeling, meaning IT teams can focus on other areas, accelerating and simplifying their Zero Trust initiatives.
This article is sponsored by Akamai
