White Star Capital Announces the First Close of its $30M Digital Asset Fund

White Star Capital, a global multi-stage technology venture capital investment platform, today announced the first close of its new $30M Digital Asset Fund. The new fund will invest in crypto networks and blockchain-enabled businesses at each layer of the tech stack, including protocols, infrastructure and applications. The Digital Asset Fund will deploy between $500,000 and $2.0 million in initial investments into 15-20 companies, with a core focus on Europe and North America. The team will take a research-driven approach to maximize returns through early access to both equity and tokens. To date, the fund has made two investments: dfuse, a blockchain API company that helps developers build performant applications by organizing the world’s decentralized data; and Multis, a Y Combinator-backed company, which offers “crypto-first” business banking accounts with an aspiration to bridge the gap between Decentralized and Legacy Finance.

The Digital Asset Fund marks White Star Capital’s first specialty fund and will be run by New York-based General Partner Sep Alavi and supported by Principals Thomas Klocanas in New York and Sanjay Zimmerman in Toronto. The fund will also benefit from White Star Capital’s extended team in London and Paris, as well as a network of venture partners in Tokyo, Hong Kong, San Francisco and Switzerland.

An early-stage investor since 2013, Sep has spent the past five years investing in crypto assets and working with blockchain companies. Prior to joining White Star Capital, he spent 12 years in capital markets in Paris, Chicago and New York City. Thomas started his career as a research analyst covering the fintech and financial sector at Barclays in London before spending two years at Consensys in New York and consulting for blockchain projects such as Coinhouse and Blockstack. He joined White Star Capital eighteen months ago. Sanjay joined White Star Capital in Montreal in 2017 and was recently promoted to Principal after completing his MBA at INSEAD in Singapore, Philadelphia and Fontainebleau.

“Digital Assets are gaining adoption, and venture valuations and token round distributions have rationalized, creating an attractive entry point,” said Sep Alavi. “Incumbents are rapidly entering the space and embracing the next wave of innovation. At WSC Digital Asset, we are looking to partner with the next generation of startups building global networks and frictionless business models. We are actively interested in investing in crypto protocols, infrastructure and middleware, privacy, financial, gaming, and social use cases.”

Regulation around the sector continues to evolve pragmatically and positively, while enterprise and institutional giants like Fidelity, JP Morgan, Facebook, Walmart and Square are embracing blockchain technology at an accelerating pace, and this is just the beginning. According to a report by Andreessen Horowitz, startup and developer activity in the space have grown at a 53.9% and 74.4% CAGR, respectively, since 2010.

“White Star Capital’s international footprint and network, as well as our team’s blockchain expertise position us well to make an impact in the digital asset sector,” said Eric Martineau-Fortin, Founder and Managing Partner of White Star Capital. “Our platform and network will give us access to exceptional deal flow from various companies across blockchain use cases.”


AI Infrastructure Company EverMind Releases EverMemOS, Responding to Profound Challenges in AI

AI infrastructure company EverMind has recently released EverMemOS, an open-source Memory Operating System designed to address one of artificial intelligence’s most profound challenges: equipping machines with scalable, long-term memory.

The Memory Bottleneck

For years, large language models (LLMs) have been constrained by fixed context windows, a limitation that causes “forgetfulness” in long-term tasks. This results in broken context, factual inconsistencies, and an inability to deliver deep personalization or maintain knowledge coherence. The issue extends beyond technical hurdles; it represents an evolutionary bottleneck for AI. An entity without memory cannot exhibit behavioral consistency or initiative, let alone achieve self-evolution. Personalization, consistency, and proactivity, which are considered the hallmarks of intelligence, all depend on a robust memory system.

There is a consensus that memory is becoming the core competitive edge and defining boundary of future AI. Yet existing solutions, such as Retrieval-Augmented Generation (RAG) and fragmented memory systems, remain limited in scope, failing to support both 1-on-1 companion use cases and complex multi-agent enterprise collaboration. Few meet the standard of precision, speed, usability, and adaptability required for widespread adoption. Equipping large models with a high-performance, pluggable memory module remains a core unmet demand across AI applications.

Discoverative Intelligence

“Discoverative Intelligence” is a concept proposed in late 2025 by entrepreneur and philanthropist Chen Tianqiao. Unlike generative AI, which mimics human output by processing existing data, Discoverative Intelligence describes an advanced AI form that actively asks questions, forms testable hypotheses, and discovers new scientific principles. It prioritizes understanding causality and underlying principles over statistical patterns, a shift Chen argues is essential to achieving Artificial General Intelligence (AGI).

Chen contrasted two dominant AI development paths: the “Scaling Path,” which relies on expanding parameters, data, and compute power to extrapolate within a search space, and the “Structural Path,” which focuses on the “cognitive anatomy” of intelligence and how systems operate over time.

Discoverative Intelligence falls into the latter category, built on a brain-inspired model called Structured Temporal Intelligence (STI) that requires five core capabilities in a closed loop: neural dynamics (sustained, self-organizing activity to keep systems “alive”), long-term memory (storing and selectively forgetting experiences to build knowledge), causal reasoning (inferring “why” events occur), world modeling (an internal simulation of reality for prediction), and metacognition & intrinsic motivation (curiosity-driven exploration, not just external rewards).

Among these capabilities, long-term memory serves as the vital link between time and intelligence, highlighting its indispensable role in the path toward achieving true AGI.

EverMind’s Answer

EverMemOS is EverMind’s answer to this need: an open-source Memory Operating System designed as foundational technology for Discoverative Intelligence. Inspired by the hierarchical organization of the human memory system, EverMemOS features a four-layer architecture analogous to key brain regions: an Agentic Layer (task planning, mirroring the prefrontal cortex), a Memory Layer (long-term storage, like cortical networks), an Index Layer (associative retrieval, drawing from the hippocampus), and an API/MCP Interface Layer (external integration, serving as AI’s “sensory interface”).
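
The four-layer split can be pictured with a minimal sketch. All class and method names below are illustrative assumptions for this example, not the real EverMemOS API (which lives in the project’s GitHub repository):

```python
# Minimal sketch of a layered memory system in the spirit of the
# architecture described above. Names are invented for illustration.

class MemoryLayer:
    """Long-term storage: an append-only list of memory records."""
    def __init__(self):
        self.records = []

    def store(self, text):
        self.records.append(text)

class IndexLayer:
    """Associative retrieval: rank stored records by keyword overlap."""
    def __init__(self, memory):
        self.memory = memory

    def retrieve(self, query, k=3):
        q = set(query.lower().split())
        scored = [(len(q & set(r.lower().split())), r) for r in self.memory.records]
        scored.sort(key=lambda s: s[0], reverse=True)
        return [r for score, r in scored[:k] if score > 0]

class AgenticLayer:
    """Task planning: decide what stored context to pull before answering."""
    def __init__(self, index):
        self.index = index

    def answer(self, question):
        context = self.index.retrieve(question)
        return {"question": question, "context": context}

# The API/interface layer would expose a single entry point like this:
memory = MemoryLayer()
memory.store("User prefers metric units")
memory.store("User's project is a climate dashboard")
agent = AgenticLayer(IndexLayer(memory))
print(agent.answer("What units does the user prefer?"))
```

A production memory system replaces the keyword overlap with learned embeddings and adds selective forgetting, but the layering, storage beneath an index beneath a planner, is the architectural idea.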

The system delivers breakthroughs in both scenario coverage and technical performance. It is the first memory system capable of supporting both 1-on-1 conversation use cases and complex multi-agent enterprise collaboration. On technical benchmarks, EverMemOS achieved 92.3% accuracy on LoCoMo (a long-context memory evaluation) and 82% on LongMemEval-S (a suite for assessing long-term memory retention), significantly surpassing prior state-of-the-art results and setting a new industry standard.

The open-source version of EverMemOS is now available on GitHub, with a cloud service version to be launched late this year. The dual-track model, combining open collaboration with managed cloud services, aims to drive industry-wide evolution in long-term memory technology, inviting developers, enterprises, and researchers to contribute to and benefit from the system.

About EverMind

EverMind is redefining the future of AI by solving one of its most fundamental limitations: long-term memory. Its flagship platform, EverMemOS, introduces a breakthrough architecture for scalable and customizable memory systems, enabling AI to operate with extended context, maintain behavioral consistency, and improve through continuous interaction.

To learn more about EverMind and EverMemOS, please visit:
Website: https://evermind.ai/
GitHub: https://github.com/EverMind-AI/EverMemOS
X: https://x.com/EverMindAI
Reddit: https://www.reddit.com/r/EverMindAI/

Salt Security Brings MCP Threat Protection to AWS WAF, Blocking AI Agent Abuse in Real Time

Salt Security, the leader in API security, today announced it is extending its patented, award-winning API behavioral threat protection to detect and block malicious intent targeting Model Context Protocol (MCP) servers deployed within the AWS ecosystem. Building on the recent launch of Salt’s MCP Finder technology, Salt now enables organizations to identify external misuse and abuse of MCP servers by AI agents and attackers, and automatically block these threats using its integration with AWS WAF.

MCP servers have rapidly become a key component of enterprise AI architecture, enabling LLMs and autonomous agents to call APIs, execute tools, and complete workflows. But they also represent a new threat vector. Deployed without central oversight and often exposed to the internet, MCP servers are increasingly targeted by adversaries seeking unauthorized access to critical data and systems.

With this new capability, Salt enables customers to use their existing AWS WAF deployments to block attacks on MCP infrastructure. The protections are informed by real-time behavioral threat data from Salt’s platform.

“Most organizations don’t even know how many MCP servers they have, let alone which ones are exposed or being abused,” said Nick Rago, VP of Product Strategy at Salt Security. “This capability lets them take action quickly, using existing controls to prevent real threats without needing to deploy new infrastructure.”

The solution is based on Salt’s MCP Finder technology, which provides full visibility into the MCP layer across external, internal, and shadow deployments. By combining that discovery with AWS WAF, customers can:

  • Automatically block MCP misuse and abuse before it impacts applications
  • Discover previously unknown or unmanaged MCP implementations and ensure traffic is routed through AWS WAF for inspection and protection
  • Extend AWS WAF edge protection to the AI action layer
  • Apply intent-based behavioral threat detection to stop attacks targeting key AI infrastructure that traditional tools miss
  • Continuously update protections based on evolving attacker tactics
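
The detect-then-block flow can be illustrated with a small, hypothetical sketch: flag clients whose request pattern looks like tool-enumeration abuse, and emit a block list that a real deployment would push into an AWS WAF IP set. This is not Salt’s actual detection logic, and the threshold is invented:

```python
# Hypothetical intent-detection sketch: flag clients that sweep many
# distinct MCP tools in one observation window (enumeration-style
# abuse) and produce a sorted block list for a WAF IP set.

from collections import defaultdict

ENUM_THRESHOLD = 5  # distinct tools per window treated as a sweep (invented)

def find_abusive_clients(requests):
    """requests: iterable of (client_ip, tool_name) pairs seen in one window."""
    tools_by_ip = defaultdict(set)
    for ip, tool in requests:
        tools_by_ip[ip].add(tool)
    return sorted(ip for ip, tools in tools_by_ip.items()
                  if len(tools) >= ENUM_THRESHOLD)

window = [
    ("203.0.113.9", t) for t in
    ("list_files", "read_file", "run_query", "send_mail", "exec_tool", "get_env")
] + [("198.51.100.4", "run_query"), ("198.51.100.4", "run_query")]

print(find_abusive_clients(window))  # only the sweeping client is flagged
```

Behavioral detection in production weighs far richer signals than tool counts, but the shape is the same: derive intent from traffic, then feed the verdict to an existing edge control such as AWS WAF.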

Salt Security is showcasing these capabilities at AWS re:Invent 2025. The integration is available now as part of the Salt Security API Protection Platform.

About Salt Security

Salt Security secures the APIs that power today’s digital businesses. Salt delivers the fastest API discovery in the industry—surfacing shadow, zombie, and unknown APIs before attackers find them. The company’s posture governance engine and centralized Policy Hub automate security checks and enforce safe API development at scale. With built-in rules and customizable policies, Salt makes it easy to stay ahead of compliance and reduce API risk. Salt also uses machine learning and AI to detect threats early, giving companies a critical advantage against today’s sophisticated API attacks. The world’s leading organizations trust Salt to find API gaps fast, shut down risks, and keep their businesses moving. Learn more at https://salt.security

NeuralTrust introduces Guardian Agents: the first AI agents built to protect other agents

NeuralTrust, the security platform for AI Agents and LLM applications, today announced Guardian Agents, a new class of autonomous security agents designed to defend enterprise AI systems in real time. As organizations deploy thousands of AI agents, each connected to tools, APIs, and sensitive workflows, Guardian Agents provide a dedicated, agent-native layer of protection.

Unlike traditional security controls built for static applications, Guardian Agents are active defenders. They monitor agent behavior, intercept unsafe actions, enforce tool-use policies, scan for vulnerabilities, and stop attacks before they escalate.

A new force to counter a new threat landscape

Enterprises today face an unprecedented operational challenge. AI agents can write code, move data, trigger workflows, and interact with external systems. At scale, the risk surface becomes ungovernable:

  • A single agent may access hundreds of tools
  • One misconfigured workflow can leak sensitive data
  • A prompt injection can escalate privileges or bypass guardrails

Guardian Agents act as a protective layer around this ecosystem. Instead of relying solely on static filters or manual governance, NeuralTrust gives security teams their own force of autonomous defenders to act at machine speed.

How Guardian Agents work

Rather than blocking innovation, Guardian Agents sit alongside production agents to ensure safe execution. They:

  • Stop complex attacks such as prompt injection, privilege escalation, and malicious tool use
  • Prevent data leaks by inspecting inputs, outputs, and tool interactions
  • Enforce granular policies defining exactly which tools, actions, and permissions each agent can use
  • Scan AI applications to uncover vulnerabilities, unsafe flows, and misconfigurations
  • Analyze behavior to detect anomalies and emerging threats
  • Leverage a continuously updated threat database engineered specifically for AI agents
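
The tool-use policy enforcement listed above can be sketched as a simple allowlist check that runs before any agent action executes. This is an illustration only, with invented names; NeuralTrust’s actual policy engine is considerably richer:

```python
# Illustrative per-agent tool allowlist, consulted before every action.
# The policy shape and agent/tool names are invented for this sketch.

POLICIES = {
    "support-bot": {"search_kb", "create_ticket"},
    "finance-bot": {"read_ledger"},
}

class PolicyViolation(Exception):
    pass

def guard(agent, tool, action):
    """Run `action` only if `agent` is permitted to use `tool`."""
    if tool not in POLICIES.get(agent, set()):
        raise PolicyViolation(f"{agent} may not call {tool}")
    return action()

print(guard("support-bot", "create_ticket", lambda: "ticket opened"))
try:
    guard("finance-bot", "create_ticket", lambda: "should never run")
except PolicyViolation as e:
    print("blocked:", e)
```

The point of the pattern is that the check sits outside the agent being guarded: a compromised or prompt-injected agent cannot grant itself a tool its policy does not list.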

Guardian Agents are deployed through NeuralTrust’s high-performance security platform, which processes billions of requests every month. Purpose-built for LLM and agent workloads, it delivers industry-leading performance with minimal latency, and works across all clouds, models, and integrations.

“Autonomous agents have changed the threat landscape. Defending them requires security that moves just as fast,” said Joan Vendrell, Co-Founder and CEO of NeuralTrust. “Guardian Agents give organizations a way to stay ahead of attacks, enforce policy, and deploy AI safely at scale.”

About NeuralTrust

NeuralTrust is the leading platform for securing and scaling AI Agents and LLM applications. Recognized by the European Commission as a champion in AI security, we partner with global enterprises to protect their most critical AI systems. Our technology detects vulnerabilities, hallucinations, and hidden risks before they cause damage, empowering teams to deploy AI with confidence.

Learn more at neuraltrust.ai.

Allstate launches Scam Protection to safeguard workers’ finances from rising cyber and crypto fraud

  • Allstate launches Scam Protection, a new workplace benefit that covers modern digital threats including scams, ransomware and cryptocurrency theft, featuring first-of-its-kind reimbursement up to $50,000 for verified scam losses.
  • Employers turn to Allstate amid record-breaking cybercrime trends. The insurer saved customers $33.2 million last year in potential losses, with holiday shopping periods like Black Friday and Cyber Monday driving steep spikes in fraud claims.
  • Allstate Scam Protection is available to nearly 7 million people and their families through employer benefits. Companies are adding it to benefits packages to safeguard employees’ personal finances during open enrollment.
  • Coverage extends beyond employees to protect family members, including aging parents 65 and older, regardless of where they reside.

Allstate is helping employers safeguard workers’ finances and fight cybercrime with Allstate Scam Protection, including a first-of-its-kind reimbursement up to $50,000 when money, including cryptocurrency, is stolen through scams. The coverage is available exclusively through workplace benefits, reaching nearly 7 million employees and their families during open enrollment. Employees can check their workplace benefits or ask their HR team to see if Allstate Scam Protection is offered and how to enroll.

Cybercrime is surging, straining household finances and workplace productivity. Allstate Identity Protection recovered $33.2 million in potential identity theft losses for customers in 2024, with the fourth quarter alone costing $9.8 million. New account fraud drove $23.3 million in actual losses last year, while fraudulent applications added $5.5 million in potential exposure. Each identity theft case takes time and resources to resolve, draining employee time and productivity that often overlaps with the workday.

This new protection comes as cybercrime spikes during the holiday shopping season, especially around Black Friday and Cyber Monday, when phishing texts, fake retailers and social media scams are rampant.

“Scams cost employees time, money and productivity, impacting their families and their finances,” said Caroline Slane, senior vice president of business operations at Allstate Identity Protection. “This new scam protection fills a gap by putting money back in people’s hands so they can get back on their feet.”

What does Allstate Scam Protection cover?

Allstate Scam Protection goes beyond traditional identity theft products by covering new ways criminals steal money online with fewer exclusions. Here’s what’s included:

  • Reimbursement for scams, digital crimes and social engineering: Includes unlimited claims up to $50,000 per year.
  • Cryptocurrency theft coverage: Reimburses stolen crypto resulting from cybercrime up to $50,000 per year.
  • Web, email and mobile protection: Ensures safe web browsing and protects against and alerts users to fraudulent texts, emails, links, robocalls and robotexts.
  • Family coverage: The benefit covers unlimited household members, including teens and seniors 65 and older, providing protection for those most vulnerable, even if they live outside the home.
  • Scam takedown: Customers can report malicious URLs directly to Allstate’s cyber experts for removal.
  • Personal coaching: Individual sessions with Allstate Identity Protection specialists to build tailored defense plans for individuals and families.

Why are employers adding Allstate Scam Protection to benefits packages?

Allstate Identity Protection products are already widely available through employee benefits packages at 4,500 companies including a quarter of the Fortune 500. Employers are adding Allstate Scam Protection because:

  • Fraud losses reached $12.5 billion in 2024, according to the FTC. While the number of fraud reports stayed flat, far more people lost money compared to the year prior.
  • AI has supercharged scams. AI fuels phishing emails, impersonation texts and deepfake voices or videos that are nearly impossible to spot.
  • Cybercrime impacts workplace productivity. Workers are vulnerable whenever and wherever they use devices, and they manage fraud recovery during work hours when banks and government agencies are open.

How can consumers protect themselves during holiday shopping season?

Holiday scams surge around Black Friday and Cyber Monday. Allstate recommends:

  • Slow down before buying. Beware of hard-to-find items on third-party sites and avoid sellers demanding gift card payments.
  • Protect your wallet and identity while shopping online. Pay with credit cards, keep track of purchases, update device software and avoid clicking links in unfamiliar texts or emails.
  • Use digital wallets in stores. They encrypt payment info and reduce the chance of card skimming or data theft.

About Allstate

The Allstate Corporation protects people from life’s uncertainties with affordable, simple and connected protection for autos, homes, electronic devices, and identities. Products are available through a broad distribution network including Allstate agents, independent agents, major retailers, online, and at the workplace. Allstate has more than 209 million policies in force and is widely known for the slogan “You’re in Good Hands with Allstate.” For more information, visit www.allstate.com.

Astreya Unveils New Wave of Enterprise AI Agents, Turning Operational Signals into Real Insights and Rapid Action

Astreya, the world’s leading AI-First global IT managed services provider for Digital and IT infrastructure, is accelerating its mission to make AI and automation more accessible for businesses everywhere. By publishing ready-to-use AI agents across multiple marketplaces, including the ServiceNow Store, Astreya is helping organizations adopt AI faster and turn automation into measurable results. The initiative reflects the company’s broader commitment to improving efficiency, reducing manual workloads, and driving smarter operations across cloud, workplace, and IT environments.

Astreya recently served as a Prize Partner at A2HACKFEST 2K25 in Bengaluru, underscoring its commitment to investing in the next generation of AI innovation and talent. The company also participated in Google Cloud’s Agentic AI Day Hackathon, one of India’s largest developer events, where all four of its teams ranked among the top 700 submissions from over 9,100 entries and 57,000 participants.

Astreya’s “Soup Developers” team advanced to the Top 15 finalists, ranking among the top one percent of global submissions. Their concept, a modular ecosystem of 20 specialized AI agents, was designed to redefine financial planning by automating budgeting, cash-flow forecasting, market research, and investment strategy. The project stood out for its use of the Model Context Protocol (MCP), which allows agents to access real-time financial data, simulate complex market scenarios, and deliver personalized insights aligned to each user’s objectives.

Beyond the competition, Astreya has already released four production-ready AI solutions, powered by 21 specialized agents and advanced large language models on the ServiceNow Store. These agents empower IT teams to resolve issues faster, eliminate repetitive tasks, and increase productivity, freeing them to focus on higher-value, strategic initiatives that drive business growth.

  • TicketLens (Newly Published) — A certified AI solution that delivers unified, single-pane insights across incidents and linked records, enhancing root cause analysis and resolution efficiency in dynamic ticketing environments. It provides one-click summaries of incidents, child incidents, problems, and changes; monitors CI health and completeness; and correlates related records to uncover potential root causes and recommend remediation steps. It will soon evolve toward guided and automated resolution, bringing engineers closer to faster, more accurate fixes within the ServiceNow environment.
  • Astraix — A proactive IT assistant that can analyze an image of an issue to identify the problem, recommend dynamic knowledge articles, trigger automated actions, and predictively assign the incident to the right group.
  • Attachment Summarizer — Reads and extracts the key points from ticket attachments, then updates work notes and surfaces relevant knowledge so teams don’t waste hours sifting through files.
  • Intelligent Knowledge Builder & Optimizer — Automates the creation, deduplication, and quality checks of knowledge articles, ensuring knowledge bases remain current and trustworthy.

Each solution removes the friction from IT support, enabling agents to resolve issues faster, with greater precision, at a consistently higher standard.

As part of its early adoption initiative, Astreya is offering its Agentic AI solutions free of charge for the next 3–6 months, enabling customers to experience their full potential, accelerate automation outcomes, and share actionable feedback through the ServiceNow Store.

AI Automation Assessment: Bridging Vision and Velocity

Astreya has launched the AI Automation Readiness, Maturity & Coverage Assessment, a vendor-neutral framework that helps enterprises identify automation blind spots, evaluate their current state, and accelerate AIOps adoption. The program delivers a maturity and tool-gap analysis, AI readiness scores, benefit projections, and a clear roadmap for transformation.

To complement this, Astreya’s Enterprise AI Services team introduced RapidPulse, a free, five-minute self-assessment that measures readiness across five pillars—Strategy, Tools & Platform, Data & Infrastructure, Process, and People—and provides an instant snapshot of AI and automation maturity.

By revealing where automation delivers the most value, Astreya enables organizations to prioritize investments, strengthen operational resilience, and move confidently from manual workflows to intelligent, autonomous IT operations.

“Most enterprises are still experimenting with AI in isolated pilots. The problem is that those efforts rarely scale. They stay in the lab, disconnected from the systems that drive real work. That means missed efficiency gains, higher costs, and teams carrying more manual effort than they should. By pairing agent-native applications with structured assessments and deployment playbooks, we embed AI directly where it matters, making businesses faster, leaner, and more resilient. Our new ServiceNow AI agents are a clear example of that shift,” said Romil Bahl, CEO, Astreya.

Expanding the AI Ecosystem with a Databricks Marketplace Debut

Building on its growing momentum in AI and automation, Astreya has launched its first solution on the Databricks Marketplace: Data Trust and Stats Intelligence (DTSi), now available for users to explore at no cost. This milestone also includes recognition as a validated Databricks Data Partner, reinforcing the company’s continued investment in scalable, real-world AI and data innovation.

Powered by five Gemini-enabled AI agents, DTSi is designed to help teams turn complex, unstructured datasets into trusted, actionable intelligence. The solution applies more than 15 advanced analytical and statistical techniques, from anomaly detection and correlation mapping to predictive modeling and hypothesis testing, to surface insights that accelerate better decision-making.
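
One of the techniques named above, anomaly detection, can be illustrated with a plain z-score check: flag values that sit far from the mean in standard-deviation terms. This is a generic textbook example, not DTSi’s implementation:

```python
# Generic z-score anomaly detection: flag values more than `z` sample
# standard deviations from the mean. Illustrative only.

from statistics import mean, stdev

def anomalies(values, z=3.0):
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [v for v in values if abs(v - mu) / sigma > z]

daily_requests = [100, 102, 98, 101, 99, 103, 97, 500]  # one obvious spike
print(anomalies(daily_requests, z=2.0))
```

Correlation mapping, hypothesis testing, and predictive modeling follow the same pattern at larger scale: a statistical routine applied over the dataset, with the flagged results surfaced to the user as candidate insights.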

This expansion into the Databricks ecosystem reflects the same guiding principle behind Astreya’s multi-platform AI strategy: make AI easier to adopt, integrate, and scale. By delivering marketplace-ready solutions that unify data intelligence, automation, and AI-driven analysis, the company is helping enterprises move from raw data to confident action with greater speed and clarity.

Hyderabad: A Strategic Launchpad in a Global Model

The Enterprise AI Innovation Center in Hyderabad serves as the nucleus for applied research, experimentation, and rapid development of enterprise-grade AI solutions.

The center focuses on turning ideas into deployable outcomes, from developing AI agents and automation accelerators to creating point solutions tailored for business and IT operations. It brings together data engineers, AI scientists, and automation architects to prototype, validate, and scale solutions that directly address real-world enterprise challenges.

Beyond R&D, the center also serves as a client co-innovation space, where teams jointly explore use cases, assess AI readiness, and design adoption roadmaps that bridge experimentation and enterprise deployment.

“Our Hyderabad Innovation Center is a springboard for enterprise AI, where we validate agent-native ideas, run assessments to surface real value, and then harden solutions for production. Our Enterprise AI capability is global, but hubs like Hyderabad help us compress the cycle from prototype to deployment so clients see measurable outcomes faster,” explained Jothiganesh Nagarajan, COO, Astreya.

Looking ahead

Through strategic partnerships, agent-based innovation, and scaled engineering, Astreya remains focused on one core priority: turning AI into measurable enterprise value. The company continues to invest in multi-agent design, platform-native integration, and specialized engineering talent to help clients move beyond pilots and proofs of concept toward AI solutions that scale, deliver, and stick.

About Astreya

Astreya is a global IT managed services provider that powers enterprises by designing, deploying, and managing complex technology environments. We deliver end-to-end solutions across hybrid cloud, data centers, network infrastructure, and the digital workplace. Intelligent automation and AI run through everything we build to drive efficiency, accelerate service delivery, and clear barriers to growth for our customers.

Learn more at www.astreya.com

Nirmata Launches AI Platform Engineer to Automate Cloud-Native Infrastructure Governance and Management

AI-driven solution delivers enterprise Kubernetes management with automated policy-as-code for security, compliance, and governance

Nirmata, creator of Kyverno and leader in policy-as-code innovation, today announced the general availability of its AI Platform Engineering Assistant, an AI-powered solution that automates Kubernetes security, compliance, and workflow management across Kubernetes, Infrastructure as Code (IaC), and hybrid-cloud environments.

As organizations accelerate AI-assisted software development, platform teams must keep pace with increasingly complex infrastructure. Industry data shows a 30x acceleration in software creation and over $350 billion in AI infrastructure investment, yet nearly half of enterprises cite critical platform engineering skill gaps. Nirmata’s AI assistant empowers platform teams by automating the time-intensive work of Kubernetes policy management and infrastructure security, enabling them to scale.

“Platform engineering has become both the bottleneck and the enabler of the AI future,” said Ritesh Patel, Vice President of Product at Nirmata. “Without scalable governance, innovation stalls under complexity and risk. With AI-powered governance, Nirmata transforms policy-as-code into a continuous, intelligent system that enforces compliance without slowing teams down.”

Built on the proven Kyverno policy-as-code engine, a CNCF incubating project covering Kubernetes, IaC, and cloud, the assistant uses a multi-agent architecture to automate policy authoring, detection, and remediation, creating a system for continuous Kubernetes governance and compliance that keeps humans in the loop while automating the most time-consuming tasks.

Key capabilities include:

  • Copilot interface: Conversational AI that turns hours-long investigation cycles into minutes. Engineers use natural language to instantly pull detailed insights, data, and reports about their infrastructure and generate enforcement actions.
  • Policy-as-Code Agent: Transforms natural language rules into validated Kyverno policy-as-code for Kubernetes and IaC, ensuring each rule aligns with security and compliance standards. This streamlines policy creation and eliminates common syntax errors while helping platform teams standardize governance across clusters and pipelines.
  • Remediation Agent: Detects misconfigurations and policy violations, then generates and validates secure fixes with human verification in the loop. This drastically reduces the time engineers spend diagnosing and correcting issues while ensuring every change remains compliant and secure.
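
To make the “natural language in, policy-as-code out” idea concrete: a rule such as “every container must set CPU and memory limits” corresponds to a Kyverno ClusterPolicy like the following. This is a generic hand-written example of Kyverno syntax, not actual output from Nirmata’s agent:

```yaml
# Illustrative Kyverno ClusterPolicy requiring resource limits on Pods.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-container-limits
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "CPU and memory limits are required for all containers."
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    cpu: "?*"
                    memory: "?*"
```

Generating, validating, and maintaining declarative rules like this across many clusters is exactly the repetitive work the agents described above are meant to absorb.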

Together, these agents deliver AI-powered Kubernetes security through a collaborative, intelligent system that continuously strengthens security, compliance, and operational trust while freeing engineers to focus on higher-value innovation. The AI Platform Engineering Assistant supports all common Kubernetes, Infrastructure-as-Code, and CI/CD systems, with native support for multi-cluster Kubernetes management and seamless integration with existing developer workflows.

Availability

The Nirmata AI Platform Engineering Assistant is now available to enterprise customers. Live demonstrations will be featured at KubeCon + CloudNativeCon North America 2025 and KyvernoCon.
To learn more or request a demo, visit nirmata.com.

About Nirmata

Nirmata is the creator of Kyverno, the CNCF policy engine for Kubernetes security and governance. With 2.5B+ downloads, Nirmata’s AI-powered policy-as-code solutions help enterprises automate Kubernetes compliance, prevent misconfigurations, and deliver enterprise Kubernetes management at scale across regulated industries. For more information, visit www.nirmata.com.