Technologies

ThreatHunter.ai Eliminates MFA Attacks with MILBERT, the First Agentic AI Built to Kill Session Hijacks in Real Time

ThreatHunter.ai is calling time on the illusion of safety. For two years, the cybersecurity industry has watched adversary-in-the-middle attacks bypass MFA, hijack sessions, and walk through the front door of some of the most hardened networks on Earth. And no one stopped them. Until now.

MILBERT is not another alerting tool. It is the first AI system that sees the hijack live and stops it cold. 87 percent of successful cyberattacks in 2024 happened after MFA showed green. This is not theoretical, and it is not classic credential phishing: it is real access, granted by the victim, with MFA in place. The breach happens the moment the attacker captures a valid session token.

This is how Evilginx works:

User clicks login → MFA is entered → Evilginx proxy relays it → Session token is stolen → Attacker enters with full access

Nothing looks wrong. No malware was dropped. No alert triggers. But the attacker now has everything.

Void Blizzard. Storm-2372. Tycoon 2FA.

These campaigns are using weaponized proxies to compromise global nonprofits, government contractors, and critical infrastructure. Not in the future. Today.

The industry’s response has been silence or spin

EDRs don’t see it. SIEMs log it after the fact. SEGs are blind. MFA keeps showing checkmarks. Security leaders are being lulled into a false sense of protection by the very tools that attackers are walking right through.

ThreatHunter.ai built MILBERT to break that cycle. MILBERT is not a rule engine. It is an agentic AI that reasons, evaluates risk in real time, and acts without waiting.

MILBERT defends identity trust across five core layers:

  1. Live Token Analysis — Tracks the entire lifecycle of each session. If tokens are reused, abused, or proxied, the session is terminated immediately.
  2. Browser and Device Fingerprinting — Validates that the login source is legitimate. No spoofed headers or mismatched device details get through.
  3. Behavioral Baselines — Learns each user’s real behavior over time and reacts instantly to suspicious deviations.
  4. Trust Classification Engine — Scores every login with a verdict: Trusted, Conditional, Enhanced Verification, Deny, or Investigate.
  5. Autonomous Response — MILBERT does not wait for approval. It blocks, revokes, and alerts on its own.
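The layers above can be illustrated with a toy trust classifier. This is a hypothetical sketch of the idea, not MILBERT's actual scoring logic; the fields, risk weights, and thresholds are invented for illustration.

```python
# Hypothetical sketch of layers 1-4: bind each session to a device
# fingerprint and score logins against a per-user baseline. Weights and
# thresholds are illustrative assumptions, not MILBERT's implementation.

from dataclasses import dataclass

@dataclass
class Login:
    user: str
    fingerprint: str   # browser/device fingerprint hash
    ip_country: str
    hour: int          # local hour of the login attempt

BASELINES = {
    "alice": {"fingerprint": "fp-alice-laptop", "ip_country": "US",
              "hours": range(7, 20)},
}

def classify(login: Login) -> str:
    base = BASELINES.get(login.user)
    if base is None:
        return "Investigate"           # no behavioral baseline yet
    risk = 0
    if login.fingerprint != base["fingerprint"]:
        risk += 2                      # device/browser mismatch
    if login.ip_country != base["ip_country"]:
        risk += 1                      # unusual geography
    if login.hour not in base["hours"]:
        risk += 1                      # unusual time of day
    if risk == 0:
        return "Trusted"
    if risk == 1:
        return "Conditional"
    if risk == 2:
        return "Enhanced Verification"
    return "Deny"                      # autonomous response would revoke here

# A proxied login reuses alice's valid token but arrives from a new
# device, in another country, at 3 a.m.: every signal disagrees.
hijack = Login("alice", fingerprint="fp-unknown", ip_country="NL", hour=3)
print(classify(hijack))  # -> Deny
```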

What traditional security calls normal, MILBERT calls compromised

Attackers are not guessing passwords. They are stealing trust. And trust, once hijacked, is not detectable by static systems.

MILBERT was built by the same team that has spent years tracking real breaches in the wild and responding to live attacks that slipped through every major product stack.

If your security strategy still ends at MFA, you are not protected. You are exposed.

MILBERT changes that by turning every login into a decision.

  • It does not assume trust. It proves it.
  • It does not rely on clean checkmarks.
  • It analyzes flow, session, fingerprint, timing, risk, velocity, and history.
  • It scores it. It classifies it. And if needed, it kills it.

Read the blog post that started it all
https://threathunter.ai/general/mfa-is-failing-and-only-milbert-can-save-it/

Then go get your own MILBERT https://milbert.ai

Pricing starts at $4,995 per year. That is the introductory rate. No bait and switch. No tiered nonsense. Just full protection from day one.

About ThreatHunter.ai

ThreatHunter.ai, a 100% Service-Disabled Veteran Owned Small Business, is a leading provider of AI-driven threat hunting solutions. Ranked in the top 50 MSSPs in the world, ThreatHunter.ai continues to shape the future of cybersecurity with solutions that stay ahead of evolving threats.

Technologies

WEKA Debuts NeuralMesh Axon For Exascale AI Deployments

New Offering Delivers a Unique Fusion Architecture That’s Being Leveraged by Industry-Leading AI Pioneers Like Cohere, CoreWeave, and NVIDIA to Deliver Breakthrough Performance Gains and Reduce Infrastructure Requirements For Massive AI Training and Inference Workloads

From RAISE SUMMIT 2025: WEKA unveiled NeuralMesh Axon, a breakthrough storage system that leverages an innovative fusion architecture designed to address the fundamental challenges of running exascale AI applications and workloads. NeuralMesh Axon seamlessly fuses with GPU servers and AI factories to streamline deployments, reduce costs, and significantly enhance AI workload responsiveness and performance, transforming underutilized GPU resources into a unified, high-performance infrastructure layer.

Building on the company’s recently announced NeuralMesh storage system, the new offering enhances its containerized microservices architecture with powerful embedded functionality, enabling AI pioneers, AI cloud and neocloud service providers to accelerate AI model development at extreme scale, particularly when combined with NVIDIA AI Enterprise software stacks for advanced model training and inference optimization. NeuralMesh Axon also supports real-time reasoning, with significantly improved time-to-first-token and overall token throughput, enabling customers to bring innovations to market faster.

AI Infrastructure Obstacles Compound at Exascale

Performance is make-or-break for large language model (LLM) training and inference workloads, especially when running at extreme scale. Organizations that run massive AI workloads on traditional storage architectures, which rely on replication-heavy approaches, waste NVMe capacity, face significant inefficiencies, and struggle with unpredictable performance and resource allocation.

The reason? Traditional architectures weren’t designed to process and store massive volumes of data in real-time. They create latency and bottlenecks in data pipelines and AI workflows that can cripple exascale AI deployments. Underutilized GPU servers and outdated data architectures turn premium hardware into idle capital, resulting in costly downtime for training workloads. Inference workloads struggle with memory-bound barriers, including key-value (KV) caches and hot data, resulting in reduced throughput and increased infrastructure strain. Limited KV cache offload capacity creates data access bottlenecks and complicates resource allocation for incoming prompts, directly impacting operational expenses and time-to-insight. Many organizations are transitioning to NVIDIA accelerated compute servers, paired with NVIDIA AI Enterprise software, to address these challenges. However, without modern storage integration, they still encounter significant limitations in pipeline efficiency and overall GPU utilization.

Built For The World’s Largest and Most Demanding Accelerated Compute Environments

To address these challenges, NeuralMesh Axon’s high-performance, resilient storage fabric fuses directly into accelerated compute servers, leveraging their local NVMe, spare CPU cores, and existing network infrastructure. This unified, software-defined compute and storage layer delivers consistent microsecond latency for both local and remote workloads, outpacing traditional protocols like NFS.

Additionally, when leveraging WEKA’s Augmented Memory Grid capability, it can provide near-memory speeds for KV cache loads at massive scale. Unlike replication-heavy approaches that squander aggregate capacity and collapse under failures, NeuralMesh Axon’s unique erasure coding design tolerates up to four simultaneous node losses, sustains full throughput during rebuilds, and enables predefined resource allocation across the existing NVMe, CPU cores, and networking resources—transforming isolated disks into a memory-like storage pool at exascale and beyond while providing consistent low latency access to all addressable data.
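The capacity argument against replication can be made concrete with a back-of-the-envelope comparison. The 16+4 data/parity split below is our assumption of a layout that tolerates four simultaneous node losses, not a published WEKA parameter.

```python
# Back-of-the-envelope comparison (assumed parameters, not WEKA's published
# design): usable capacity under 3x replication vs. an erasure code that
# survives four node losses, e.g. a hypothetical 16+4 data+parity layout.

def usable_fraction_replication(copies: int) -> float:
    # Each byte is stored `copies` times, so only 1/copies is usable.
    return 1 / copies

def usable_fraction_ec(data: int, parity: int) -> float:
    # Only the parity stripes are overhead.
    return data / (data + parity)

raw_tb = 1000  # raw NVMe pool, in TB

rep = raw_tb * usable_fraction_replication(3)  # ~333 TB usable
ec = raw_tb * usable_fraction_ec(16, 4)        # 800 TB usable

print(f"3x replication: {rep:.0f} TB usable")
print(f"16+4 erasure coding: {ec:.0f} TB usable")
```

Under these assumptions the erasure-coded pool yields roughly 2.4x the usable capacity of triple replication from the same raw NVMe, while still tolerating four concurrent node failures.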

Cloud service providers and AI innovators operating at exascale require infrastructure solutions that can match the exponential growth in model complexity and dataset sizes. NeuralMesh Axon is specifically designed for organizations operating at the forefront of AI innovation that require immediate, extreme-scale performance rather than gradual scaling over time. This includes AI cloud providers and neoclouds building AI services, regional AI factories, major cloud providers developing AI solutions for enterprise customers, and large enterprise organizations deploying the most demanding AI inference and training solutions that must agilely scale and optimize their AI infrastructure investments to support rapid innovation cycles.

Delivering Game-Changing Performance for Accelerated AI Innovation

Early adopters, including Cohere, the industry’s leading security-first enterprise AI company, are already seeing transformational results.

Cohere is among WEKA’s first customers to deploy NeuralMesh Axon to power its AI model training and inference workloads. Faced with high innovation costs, data transfer bottlenecks, and underutilized GPUs, Cohere first deployed NeuralMesh Axon in the public cloud to unify its AI stack and streamline operations.

“For AI model builders, speed, GPU optimization, and cost-efficiency are mission-critical. That means using less hardware, generating more tokens, and running more models—without waiting on capacity or migrating data,” said Autumn Moulder, vice president of engineering at Cohere. “Embedding WEKA’s NeuralMesh Axon into our GPU servers enabled us to maximize utilization and accelerate every step of our AI pipelines. The performance gains have been game-changing: Inference deployments that used to take five minutes can occur in 15 seconds, with 10 times faster checkpointing. Our team can now iterate on and bring revolutionary new AI models, like North, to market with unprecedented speed.”

To improve training and help develop North, Cohere’s secure AI agents platform, the company is deploying WEKA’s NeuralMesh Axon on CoreWeave Cloud, creating a robust foundation to support real-time reasoning and deliver exceptional experiences for Cohere’s end customers.

“We’re entering an era where AI advancement transcends raw compute alone—it’s unleashed by intelligent infrastructure design. CoreWeave is redefining what’s possible for AI pioneers by eliminating the complexities that constrain AI at scale,” said Peter Salanki, CTO and co-founder at CoreWeave. “With WEKA’s NeuralMesh Axon seamlessly integrated into CoreWeave’s AI cloud infrastructure, we’re bringing processing power directly to data, achieving microsecond latencies that reduce I/O wait time and deliver more than 30 GB/s read, 12 GB/s write, and 1 million IOPS to an individual GPU server. This breakthrough approach increases GPU utilization and empowers Cohere with the performance foundation they need to shatter inference speed barriers and deliver advanced AI solutions to their customers.”

“AI factories are defining the future of AI infrastructure built on NVIDIA accelerated compute and our ecosystem of NVIDIA Cloud Partners,” said Marc Hamilton, vice president of solutions architecture and engineering at NVIDIA. “By optimizing inference at scale and embedding ultra-low latency NVMe storage close to the GPUs, organizations can unlock more bandwidth and extend the available on-GPU memory for any capacity. Partner solutions like WEKA’s NeuralMesh Axon deployed with CoreWeave provide a critical foundation for accelerated inferencing while enabling next-generation AI services with exceptional performance and cost efficiency.”

The Benefits of Fusing Storage and Compute For AI Innovation

NeuralMesh Axon delivers immediate, measurable improvements for AI builders and cloud service providers operating at exascale, including:

  • Expanded Memory With Accelerated Token Throughput: Provides tight integration with WEKA’s Augmented Memory Grid technology, extending GPU memory by leveraging it as a token warehouse. This has delivered a 20x improvement in time to first token performance across multiple customer deployments, enabling larger context windows and significantly improved token processing efficiency for inference-intensive workloads. Furthermore, NeuralMesh Axon enables customers to dynamically adjust compute and storage resources and seamlessly supports just-in-time training and just-in-time inference.
  • Huge GPU Acceleration and Efficiency Gains: Customers are achieving dramatic performance gains with NeuralMesh Axon, with GPU utilization during AI model training exceeding 90%—a three-fold improvement over the industry average. NeuralMesh Axon also reduces rack space, power, and cooling requirements in on-premises data centers, helping to lower infrastructure costs and complexity by leveraging existing server resources.
  • Immediate Scale for Massive AI Workflows: Designed for AI innovators who need immediate extreme scale, rather than to grow over time. NeuralMesh Axon’s containerized microservices architecture and cloud-native design enable organizations to scale storage performance and capacity independently while maintaining consistent performance characteristics across hybrid and multicloud environments.
  • Enables Teams to Focus on Building AI, Not Infrastructure: Runs seamlessly across hybrid and cloud environments, integrating with existing Kubernetes and container environments to eliminate the need for external storage infrastructure and reduce complexity.

“The infrastructure challenges of exascale AI are unlike anything the industry has faced before. At WEKA, we’re seeing organizations struggle with low GPU utilization during training and GPU overload during inference, while AI costs spiral into millions per model and agent,” said Ajay Singh, chief product officer at WEKA. “That’s why we engineered NeuralMesh Axon, born from our deep focus on optimizing every layer of AI infrastructure from the GPU up. Now, AI-first organizations can achieve the performance and cost efficiency required for competitive AI innovation when running at exascale and beyond.”

Availability

NeuralMesh Axon is currently available in limited release for large-scale enterprise AI and neocloud customers, with general availability scheduled for fall 2025. For more information, visit:

Product Page: https://www.weka.io/product/neuralmesh-axon/
Solution Brief: https://www.weka.io/resources/solution-brief/weka-neuralmesh-axon-solution-brief
Blog Post: https://www.weka.io/blog/ai-ml/neuralmesh-axon-reinvents-ai-infrastructure-economics-for-the-largest-workloads/

About WEKA

WEKA is transforming how organizations build, run, and scale AI workflows through NeuralMesh, its intelligent, adaptive mesh storage system. Unlike traditional data infrastructure, which becomes more fragile as AI environments expand, NeuralMesh becomes faster, stronger, and more efficient as it scales, growing with your AI environment to provide a flexible foundation for enterprise and agentic AI innovation. Trusted by 30% of the Fortune 50 and the world’s leading neoclouds and AI innovators, NeuralMesh maximizes GPU utilization, accelerates time to first token, and lowers the cost of AI innovation. Learn more at www.weka.io

Technologies

Reimagining AI to Discover the Art of the Possible in Wealth Management

We have moved past the hype and are arriving at the point where Artificial Intelligence (AI) will have a significant impact on how we do our jobs. Sitting on the sidelines is no longer an option. Celent reported that 62% of wealth managers in the US are either in production, piloting, experimenting, or exploring use cases with generative AI. Engagement is not limited to large firms: 50% of organizations between $1B and $20B are doing something related to AI. Still have doubts? Data from IoT Analytics and BCG found that in the final quarter of 2024, mentions of AI agents on earnings calls increased more than 330% year over year.

Progress is not without challenges. Questions about the accuracy of the results, cost of the solution, nervousness about the privacy of data, and the lack of clear regulations on the use of AI technology have held some back from diving deeper into AI. This shouldn’t stop you from understanding what AI is and how it can impact your organization. Harvard Business School professor Karim Lakhani summed it up well: “AI won’t replace humans, but humans with AI will replace humans without AI.” Not learning more about what the future holds and developing a strategy only puts you further behind.

How Can AI Transform and Become Impactful?

Addressing the challenges and clearly defining your value proposition will show how AI can help you improve client engagement and become more efficient in the front, middle, and back offices. We know that leveraging data is the key to delivering actionable and strategic insights for use within your organization. Your firm possesses massive amounts of data, which delivers no value when it is at rest. What are some of the use cases where you can put your data to work?

Transforming Client Engagement Through Hyper-Personalization

Improving the client experience requires a fundamental change from reacting to anticipating a client’s needs. Orion’s Investors Survey found that the top two ways an advisor can enhance the client experience are communication and personalized services. Both can be achieved by improving the way you use the data within your ecosystem. We need to anticipate each client’s needs and bring them where they want to go. It comes down to making every client feel like a segment of one.

More specifically, organizations can increase scale by using generative AI solutions to support the automation of some client communications, leveraging virtual assistants to synthesize large data sets, and deploying predictive analytics through advisor dashboards to deliver the next best action. Machine learning is another way to personalize online engagement, including the delivery of custom insights.

Driving Scale

Almost all firms are under pressure to become more efficient to address shrinking margins, an aging workforce, and key talent risk. We need to think about how we can improve the effectiveness of traditional automation tools, including expanding the capabilities to address tasks that have been traditionally manual.

Optical Character Recognition (OCR) has been in place for many years, and improvements in accuracy have allowed it to become part of mainstream technology. Combining OCR with Robotic Process Automation (RPA), or using OCR to feed a transactional API, provides the beginnings of an intelligent automation solution.
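That OCR-to-API pipeline might look like the sketch below: OCR text is parsed, validated, and shaped into a payload for a transactional API. The OCR call is stubbed (in practice it might be pytesseract or a cloud OCR service), and the field names and document format are hypothetical.

```python
# Minimal sketch of intelligent document intake: OCR output is parsed,
# validated, and shaped into an API payload. The OCR step is stubbed;
# in production it might be pytesseract.image_to_string or a cloud OCR
# service. Field names and layout are hypothetical.

import re

def ocr_extract(image_path: str) -> str:
    # Stub standing in for a real OCR call on the scanned form.
    return "Account: 12345678\nTransfer Amount: $2,500.00\nDate: 2025-07-01"

def parse_transfer(text: str) -> dict:
    account = re.search(r"Account:\s*(\d+)", text)
    amount = re.search(r"Amount:\s*\$([\d,]+\.\d{2})", text)
    if not (account and amount):
        # Documents that fail validation get routed to a human, not the API.
        raise ValueError("document missing required fields")
    return {
        "account": account.group(1),
        "amount_cents": int(float(amount.group(1).replace(",", "")) * 100),
    }

payload = parse_transfer(ocr_extract("transfer_form.png"))
print(payload)  # -> {'account': '12345678', 'amount_cents': 250000}
```

The validation step is what makes this "intelligent" automation rather than blind RPA: malformed documents fall out of the pipeline for human review instead of producing bad transactions.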

Improving Your Control Framework

It is a constant battle to stay one step ahead of the fraudsters. As soon as a gap is closed, they find another vulnerability. Organizations are looking for AI-based solutions to develop a stronger control framework and quickly detect anomalies that could be fraudulent transactions.

Today we rely on humans or rules-based engines to detect irregularities, which works only when datasets are manageable and predictable, or when we already know what to look for. Schemes to defraud investors are becoming more complex. Deploying multifactor models and predictive analytics can help surface patterns of behavior and provide insight into potentially fraudulent transactions or the presence of a bad actor.
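As a minimal illustration of moving beyond fixed rules, a model can flag transactions that deviate sharply from a client's own history. The z-score test below stands in for the multifactor models mentioned above; the threshold is an assumed parameter.

```python
# Toy anomaly detector: flag amounts that deviate sharply from a client's
# own transaction history. A z-score stands in for the multifactor models
# discussed above; the 3-sigma threshold is an illustrative assumption.

import statistics

def flag_anomalies(history: list[float], new_amounts: list[float],
                   z_threshold: float = 3.0) -> list[float]:
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [a for a in new_amounts if abs(a - mean) / stdev > z_threshold]

# A client who normally moves $250-$610 suddenly wires $25,000.
history = [250.0, 400.0, 310.0, 520.0, 280.0, 450.0, 390.0, 610.0]
print(flag_anomalies(history, [480.0, 25_000.0]))  # -> [25000.0]
```

Unlike a static rule ("flag anything over $10,000"), the baseline here is learned per client, so the same dollar amount can be routine for one investor and anomalous for another.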

What Do You Need to Consider?

Transforming through emerging technology can be stressful. Simplifying your goal makes the first step easier. Consider starting with defining your organizational value statement for utilizing AI. Focus on the objective versus the outcome as the first step in developing a strategy. As you move forward, leverage a role-based design rather than taking a task-based approach to quantifying what you want to achieve. Work to identify your targeted use cases, prioritizing higher impact, lower effort. This will support the delivery of a more efficient operating model and allow for similar tasks to be solved together. After you establish the foundation, the work continues.

Assess the Regulatory Environment and Internal Compliance Requirements

At this point, organizations should have a policy defining appropriate uses for AI. This may be as simple as stating that the use of AI is strictly forbidden, is limited to certain functions, or requires a set of disclosures. More specifically, you need to define your plan to supervise the use of AI across your organization. When you deploy an AI-based solution, there needs to be a plan to identify and mitigate associated risks such as inaccuracy or bias. Understand that this document will be fluid as the regulatory environment matures.

While we do not have a fully defined regulatory framework, there are existing laws, regulations, and guidance that are applicable. Regulations exist for the protection of personally identifiable information (PII), your duty as a fiduciary, and what needs to be disclosed when using third-party information. Regardless of how the task is accomplished, the regulatory guidance stays the same. As solutions are deployed, you will want to consider how to use that tool and still ensure compliance with applicable regulatory requirements.

Get Your Data in Order

The ability to analyze large data sets is core to the value of artificial intelligence. Success depends on ensuring that the data used is accurate and available. It seems simple; however, many organizations struggle to identify a gold copy for some information, and even when one exists, its accuracy can come into question. Data governance is consistently discussed but rarely implemented. Gaps in data quality and quantity can lead to hallucinations or misinterpretations.

You will also need to clearly define what data is needed. Bigger is not always better. Narrowing the scope to only the important data elements not only makes the integration easier, it also limits what could be exposed should a bad actor gain access to your information.

Understand Your Partner’s Plan to Integrate AI

Lack of resources and subject matter expertise are two of the top reasons why most organizations fail to adopt emerging technologies. Regardless of size, many firms need to rely on partners to help deploy innovative solutions. AI is maturing at a rapid pace. Innovative firms have both a strategy and results they can point to on how they are using AI to improve their products.

Technologies

Quantum-h Achieves ISO 27001 Certification, Reinforcing Commitment to World-Class Information Security Standards

Quantum-h, a leading innovator in emerging technology and digital transformation, proudly announces that it has achieved certification for ISO 27001, the internationally recognized standard for information security management systems.

This certification marks a significant milestone in Quantum-h’s commitment to safeguarding data integrity, confidentiality, and availability across its portfolio of products and services. Following a rigorous independent audit, the achievement reflects Quantum-h’s adherence to the highest standards of risk management, cybersecurity resilience, and operational excellence.

“At Quantum-h, trust forms the foundation of our innovation,” said Leon Samuel, CEO of Quantum-h. “Securing the ISO 27001 certification underscores our proactive approach to information security and solidifies our promise to clients, partners and stakeholders: we do not just innovate – we innovate responsibly and securely.”

ISO/IEC 27001:2022 is the updated version of the globally respected framework, emphasizing a risk-based approach to managing sensitive information. The certification demonstrates that Quantum-h’s processes, technologies, and teams operate under strict security protocols that align with the latest global best practices.

As Quantum-h continues to pioneer advancements in emerging technologies, AI and transformative digital solutions, this certification provides external validation of the company’s strategic focus on building secure, future-proof infrastructures for its customers across industries.

ABOUT QUANTUM-h

Quantum-h is a global leader in emerging technologies, dedicated to driving innovation and digital transformation in enterprise digitization, NFC, QR and blockchain technologies. www.quantum-h.com

Technologies

CHAI – AI Lab Quantizes Social AI to 4-bit for +56% Increase in Throughput

CHAI, the high-growth AI startup, today unveiled a major advancement in model optimization through its successful deployment of quantized large language models (LLMs). The breakthrough—achieved by CHAI’s AI research team—reduces inference latency by 56% while preserving model performance, a critical milestone as the platform now serves 1.2 trillion tokens daily, rivaling industry giants like Anthropic’s Claude.

The Quantization Advantage

Model quantization, a technique that reduces the numerical precision of neural network parameters, has emerged as a key strategy for optimizing LLMs. CHAI’s research team systematically evaluated multiple quantization approaches (including INT8, FP16, and hybrid methods) to maximize efficiency without sacrificing output quality. The winning implementation:

  • 56% faster inference – Dramatically reduces response times for end users
  • Smaller model footprint – Lowers memory and compute costs
  • <1% performance degradation – Maintains accuracy across benchmarks
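The precision reduction at the heart of these gains can be sketched with symmetric INT8 rounding. This is a simplified, per-tensor version; CHAI's actual scheme is not public, and production implementations typically use per-channel scales and calibration data.

```python
# Simplified symmetric INT8 quantization sketch (an illustration of the
# technique, not CHAI's production scheme): map float weights onto the
# integer range [-127, 127] via a single per-tensor scale factor.

def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

weights = [0.42, -1.30, 0.07, 0.91, -0.55]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each weight now occupies 1 byte instead of 4 (FP32): a 4x smaller
# footprint, at the cost of a bounded rounding error per weight.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q)        # integer codes in [-128, 127]
print(max_err)  # bounded by scale / 2
```

The throughput gain comes from the smaller footprint: lower-precision weights mean less memory traffic per token and denser use of integer arithmetic units, which is why quality typically degrades by under a percent while latency drops sharply.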

The quantized model deployment complements CHAI’s $20 million compute investment, addressing the platform’s exponential growth. By marrying hardware scaling with algorithmic innovation, CHAI now serves 1.2T tokens per day while maintaining competitive inference speeds.

Was CHAI the first AI platform? CHAI was the first consumer AI product to reach 1 million users, leveraging the open-source LLM GPT-J, before ChatGPT or Llama existed.

What is CHAI? CHAI is a social AI platform where users can create their own AI. Since its launch three years ago, CHAI has experienced significant growth, particularly among Gen Z users. Now, to support further growth and wider adoption, CHAI has redesigned its brand.

Can you use CHAI AI in a browser? As of March 2025, no. CHAI is focused on delivering the most engaging social AI experience by hiring talented engineers to refine its app. While there are currently no plans for a web app, this may change in the future.

Is CHAI AI safe? CHAI has implemented a range of safety features that allow users to engage in dynamic chats while encouraging them to stay within established guidelines. By building better AI, CHAI aims to enhance user value and experience.

What makes CHAI special? CHAI is designed to be the most engaging social AI, delivering highly entertaining conversations. Many users rely on it to craft interactive stories and immersive experiences.

Why do people love CHAI? CHAI employs advanced AI techniques to increase the entertainment value of its bots. Users chat with AI to write interactive novels and have engaging conversations, supported by a variety of genres that appeal to avid novel readers.

Sometimes regarded as the best free AI chatbot, CHAI is paving its way to widespread adoption of conversational social AI for entertainment.

Who is the founder? William Beauchamp is a two-time founder who started building CHAI with his sister in Cambridge, UK, in 2020. After building the first AI chat platform, they relocated to Palo Alto.

Are they hiring? CHAI is a rapidly growing company known for paying very high salaries and for an intense culture focused on delivering results and iterating quickly.

Technologies

WEKA Introduces NeuralMesh: An Intelligent, Adaptive Foundation For AI Innovation, Purpose-Built for The Age of Reasoning

With a Revolutionary Service-Oriented Mesh Architecture, NeuralMesh Optimizes AI Infrastructures to Create Resilient, Efficient, Massively Scalable Token Warehouses and AI Factories That Accelerate Time to First Token and Lower the Cost of AI Innovation

WEKA today unveiled a revolutionary advancement in AI data infrastructure with the debut of NeuralMesh, a powerful new software-defined storage system featuring a dynamic mesh architecture that provides an intelligent, adaptive foundation for enterprise AI and agentic AI innovation. WEKA’s NeuralMesh is purpose-built to help enterprises rapidly develop and scale AI factories and token warehouses and deploy intelligent AI agents, delivering world-class performance with microsecond latency to support real-time reasoning and response times. Unlike traditional data platforms and storage architectures, which become more fragile as AI environments grow and stall as AI workload performance demands increase, NeuralMesh does the opposite—becoming more powerful and resilient as it scales. When hardware fails, the system rebuilds in minutes, not hours. As data grows to exabytes, performance improves rather than degrades.

With The Rise of Inference, Traditional Data Infrastructure Is Reaching Its Tipping Point

The AI industry is shifting from AI model training to inference and real-time reasoning with unforeseen velocity. As agentic AI proliferates, AI teams require adaptive infrastructure that can respond in microseconds, not milliseconds, drawing insights from multimodal AI models across distributed global networks. These increased performance and scale requirements are straining traditional data architectures and storage, pushing them to their breaking point. As a result, organizations face mounting infrastructure costs and lagging performance as their GPUs—the engines of AI innovation—sit idle, waiting for data, burning energy, and slowing token output. Ultimately, many enterprises are compelled to augment their data and GPU infrastructure by continually adding costly compute and memory resources to keep pace with their AI development needs, contributing to unsustainably high innovation costs.

“AI innovation continues to evolve at a blistering pace. The age of reasoning is upon us. The data solutions and architectures we relied on to navigate past technology paradigm shifts cannot support the immense performance density and scale required to support agentic AI and reasoning workloads. Across our customer base, we are seeing petascale customer environments growing to exabyte scale at an incomprehensible rate,” said Liran Zvibel, cofounder and CEO at WEKA. “The future is exascale. Regardless of where you are in your AI journey today, your data architecture must be able to adapt and scale to support this inevitability or risk falling behind.”

NeuralMesh: Purpose-Built to Power Agentic AI Innovation and Dynamic AI Factories

With NeuralMesh, WEKA has completely reimagined data infrastructure for the agentic AI era, providing a fully containerized, mesh-based architecture that seamlessly connects data, storage, compute, and AI services. NeuralMesh is the world’s only intelligent, adaptive storage system purpose-built for accelerating GPUs, TPUs, and AI workloads.

But NeuralMesh is more than just storage. Its software-defined microservices-based architecture doesn’t just adapt to scale—it feeds on it, becoming faster, more efficient, and more resilient as it grows, from petabytes to exabytes and beyond. NeuralMesh is as flexible and composable as modern AI applications themselves, adapting effortlessly to every deployment strategy—from bare metal to multicloud and everything in between. Organizations can start small and scale seamlessly without costly replacements or complex migrations.

NeuralMesh’s architecture delivers five breakthrough capabilities:

  • Consistent, lightning-fast data access in microseconds, even with massive datasets
  • Self-healing infrastructure that gets stronger as it scales
  • Deploy-anywhere flexibility across data center, edge, cloud, hybrid, and multicloud
  • Intelligent monitoring that optimizes performance automatically
  • Enterprise-grade security with zero-compromise performance

Unlike rigid platforms that force AI teams to work around limitations, NeuralMesh dynamically adapts to the variable needs of AI workflows, providing a flexible and intelligent foundation for enterprise and agentic AI innovation. Whether an organization is building AI factories or token warehouses, or looking to operationalize AI across the enterprise, NeuralMesh unleashes the full power of GPUs and TPUs, dramatically increasing token output while keeping energy, cloud, and AI infrastructure costs under control to deliver real business impact:

  • AI Companies can train models faster and deploy agents that reason and respond instantly, gaining a competitive advantage through a superior user experience.
  • Hyperscale and Neocloud Service Providers can serve more customers with the same infrastructure while delivering guaranteed performance at scale.
  • Enterprises can deploy and scale AI-ready infrastructure and intelligent automation throughout their operations without complexity.

“WEKA delivers exceptional performance density in a compact footprint at a very cost-effective price point, enabling us to customize AI storage solutions for each of our customers’ unique requirements,” said Dave Driggers, CEO and cofounder at Cirrascale Cloud Services. “Whether our clients need S3 compatibility for seamless data migration or the ability to burst to high-performance storage when computational demands spike, WEKA eliminates the data bottlenecks that constrain AI training, inference, and research workloads, enabling them to focus on developing breakthrough innovation rather than managing storage and AI infrastructure complexities.”

“Nebius’ mission is to empower enterprises with the most advanced AI infrastructure available. Our customers’ most demanding workloads require consistent, ultra-low-latency performance and exceptional throughput for training and inference at scale,” said Arkady Volozh, founder and CEO of Nebius. “Our collaboration with WEKA enables us to offer outstanding performance and scalability, so that our clients can harness the full potential of AI to drive innovation and accelerate growth.”

“With WEKA, we now achieve 93% GPU utilization during AI model training and have increased our cloud storage capacity by 1.5x at 80% of the previous cost,” said Chad Wood, HPC Engineering Lead at Stability AI.

Over a Decade In The Making

WEKA’s NeuralMesh system is underpinned by more than 140 patents and over a decade of innovation. What started as a parallel file system for high-performance computing (HPC) and machine learning workloads, before AI applications became mainstream, evolved into a high-performance data platform for AI, a market category WEKA pioneered in 2021. But NeuralMesh is more than just the next evolutionary step in WEKA’s innovation journey. It’s a revolutionary leap to meet the exploding growth and unpredictable demands of the dynamic AI market in the age of reasoning.

“WEKA is not just making storage faster. We’ve created an intelligent foundation for AI innovation that empowers enterprises to operationalize AI into all aspects of their business and enables AI agents to reason and react in real time,” said Ajay Singh, Chief Product Officer at WEKA. “NeuralMesh delivers all the benefits our customers loved about the WEKA Data Platform, but with an adaptable, resilient mesh architecture and intelligent services designed for the variability and low latency requirements of real-world AI systems, while allowing growth to exascale and beyond.”

Availability

NeuralMesh is available in limited release for enterprise and large-scale AI deployments, with general availability scheduled for fall 2025. For more information:

Watch the NeuralMesh launch video: https://weka.ly/nmvideo
See ‘How It Works’: https://weka.ly/howitworks
Visit our blog: https://weka.ly/nmblog

About WEKA

WEKA is transforming how organizations build, run, and scale AI workflows through NeuralMesh, its intelligent, adaptive mesh storage system. Unlike traditional data infrastructure, which becomes more fragile as AI environments expand, NeuralMesh becomes faster, stronger, and more efficient as it scales, growing with your AI environment to provide a flexible foundation for enterprise AI and agentic AI innovation. Trusted by 30% of the Fortune 50 and the world’s leading neoclouds and AI innovators, NeuralMesh maximizes GPU utilization, accelerates time to first token, and lowers the cost of AI innovation. Learn more at www.weka.io.


Technologies

GPT Gold: The First Browser Extension Turning AI Workflows Into On-Chain Value


In just over two years since the mainstream release of tools like ChatGPT, Claude, and Grok, generative AI has transitioned from novelty to necessity. A growing share of the global workforce now relies on AI to handle everything from research to customer service and marketing. According to Microsoft’s 2024 Work Trend Index, 75% of knowledge workers now use AI assistants like ChatGPT and Grok daily, with these tools handling an average of 34% of workplace tasks. As of May 2025, ChatGPT boasts 800 million weekly active users globally, with over 122 million daily users processing more than 1 billion queries each day. Grok, launched by xAI in November 2023, is gaining traction, particularly in regions like India, where its real-time data access via X appeals to users seeking uncensored insights.

However, beneath these impressive adoption figures lies a critical inefficiency. A study by Stanford University’s Institute for Human-Centered Artificial Intelligence finds that up to 17% of AI-generated responses contain “hallucinations” (plausible-sounding but factually incorrect statements), and only 18% of users feel they are using AI tools effectively. Most inputs remain vague, overly simplistic, or structurally flawed, resulting in subpar AI outputs and user frustration.

This is what we call the “prompt gap”: users have powerful models at their fingertips, but lack the interface and feedback loop to use them well.

That’s where GPT Gold comes in — and it’s redefining how people interact with AI.

GPT Gold: Where Prompt Is Power

GPT Gold is the first browser extension-based Web3 platform designed to help users interact more effectively with AI tools like ChatGPT and Grok — by providing prompt suggestions, real-time optimization, and an engaging AI Agent NFT interface.

But the platform doesn’t stop at assistance. It introduces a new model called GPT-to-Earn — where better prompting doesn’t just lead to better answers, but to token rewards.

Every time you interact with AI through GPT Gold, your AI Agent acts as an active workforce, mining $GGA tokens on your behalf and leveling up with your usage.

“You don’t just chat with AI — you train it, personalize it, and earn from it,” says the GPT Gold team. “It’s a new layer of AI experience where prompts become power — and productivity becomes personal capital.”

Solving the Prompt Problem

GPT Gold addresses the growing divide between AI power and user performance. Most people know AI can help — but they don’t know how to prompt it.

The extension solves that by:

  • Giving users real-time prompt suggestions
  • Structuring inputs for optimal LLM interpretation
  • Using NFT-based AI Agents to guide and reward interactions

It’s like having a personalized, token-generating co-pilot for your AI conversations.
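The “structuring inputs” step above can be illustrated with a minimal sketch. Note that the template and field names here are invented for illustration; GPT Gold’s actual prompt formats are not public:

```python
def structure_prompt(goal: str, context: str = "",
                     output_format: str = "a concise bulleted list") -> str:
    """Assemble a vague request into a structured prompt.

    Illustrative only: GPT Gold's real templates are not public.
    """
    sections = [
        f"Task: {goal.strip()}",
        f"Context: {context.strip()}" if context else None,
        f"Desired output: {output_format}",
        "Constraints: be specific; state assumptions explicitly.",
    ]
    # Drop empty sections and join into one structured prompt
    return "\n".join(s for s in sections if s)

prompt = structure_prompt(
    goal="Summarize Q2 sales trends",
    context="B2B SaaS, EMEA region",
)
```

The point of the pattern is that a fixed scaffold (task, context, desired output, constraints) turns a one-line request into something an LLM can interpret reliably.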

GPT-to-Earn

Every time you use ChatGPT or Grok with the GPT Gold extension:

  • Your inactive agents are reactivated
  • Any agents below full energy are fully recharged
  • And you start mining $GGA tokens, the platform’s internal utility currency

This means your productivity — not your speculation — drives your rewards.
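The three steps above amount to a simple per-session state update. A hypothetical sketch follows; the energy model and per-tier mining rates are invented for illustration, not GPT Gold’s published values:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    tier: str
    active: bool = False
    energy: float = 0.0      # 0.0 (empty) to 1.0 (full)
    mined_gga: float = 0.0

# Hypothetical per-tier mining rates (GGA per session); not official values.
MINING_RATE = {"GPT-3.5": 1.0, "GPT-4.0": 2.0, "GPT-4o": 3.0, "GPT-4.5": 5.0}

def on_ai_session(agents: list) -> None:
    """Apply the described GPT-to-Earn rules for one AI interaction."""
    for agent in agents:
        agent.active = True                          # inactive agents are reactivated
        agent.energy = 1.0                           # below-full agents are recharged
        agent.mined_gga += MINING_RATE[agent.tier]   # mining proceeds by tier rate

fleet = [Agent("GPT-3.5"), Agent("GPT-4o", active=True, energy=0.4)]
on_ai_session(fleet)
```

Under this model, rewards scale with how often you actually use AI, which matches the “productivity, not speculation” framing.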

Start Free, Upgrade as You Grow

To make onboarding accessible, GPT Gold offers each new user a free GGA agent at the GPT-3.5 tier, a fully functional AI assistant that can start mining immediately.

Those seeking greater returns can purchase GGA Agent Boxes, which unlock agents of higher rarity and token-based upgrades.

GPT Gold’s agents span four tiers:

  • GPT-3.5 (base)
  • GPT-4.0
  • GPT-4o
  • GPT-4.5

The higher the tier, the better the mining rate and energy efficiency.

Keep Your Capital: The Advance & Repay System

To engage with GPT Gold’s internal ecosystem—whether purchasing GGA Agent Boxes, upgrading agents, or powering mining—users can advance stable assets (USDT/USDC/USD1) into the GPT Gold Vault. In return, they receive $GGA tokens, which can be used across the platform.

When users are ready to exit or retrieve their funds, they simply repay the exact amount of $GGA originally issued. No interest is charged.

The system provides the economic flexibility of a token ecosystem, without locking user funds in risky speculation. With $GGA priced dynamically via a reserve-backed mechanism, the model encourages sustainable yield rather than short-term hype.
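The advance-and-repay flow described above reduces to straightforward vault bookkeeping. This sketch uses a fixed placeholder price; the real $GGA price is dynamic and reserve-backed, and none of this reflects GPT Gold’s actual contract logic:

```python
class Vault:
    """Minimal sketch of the advance/repay bookkeeping described above."""

    def __init__(self, gga_price: float):
        self.gga_price = gga_price        # dynamic and reserve-backed in practice
        self.issued = {}                  # user -> $GGA issued against their advance

    def advance(self, user: str, stable_amount: float) -> float:
        """User deposits stable assets; vault issues $GGA at the current price."""
        gga = stable_amount / self.gga_price
        self.issued[user] = self.issued.get(user, 0.0) + gga
        return gga

    def repay(self, user: str) -> float:
        """User returns the exact $GGA originally issued; no interest is charged."""
        return self.issued.pop(user, 0.0)

vault = Vault(gga_price=0.5)          # placeholder price: 1 GGA = 0.5 stable units
gga = vault.advance("alice", 100.0)   # alice receives 200.0 GGA
owed = vault.repay("alice")           # to exit, alice repays exactly 200.0 GGA
```

The key property is that the exit obligation is denominated in $GGA rather than in the stable asset, so users keep exposure to the ecosystem without an interest-bearing loan.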

Invite, Engage, Earn Together

GPT Gold’s 3-tier referral system rewards users for growing the ecosystem. Referrers receive a share of fees from their network’s activity, while invitees can start with a free agent and immediate mining access.

Combined with a low-friction onboarding flow and gamified upgrade paths, GPT Gold empowers both crypto natives and AI newcomers to build meaningful value from their daily digital routines.

Real Utility, Real AI, Real Tokens

Unlike many “AI + crypto” projects still in concept or pre-launch phases, GPT Gold is now live and functional — offering:

  • A working browser extension
  • Real-time AI integration
  • NFT-based gamified agents
  • An active economy powered by real user engagement

It’s a future-forward take on AI interaction — where you don’t just prompt a model; you own the interface, earn the rewards, and improve as you go.

GPT Gold is now available for public use. Download the extension, activate your GGA agent, and experience the future of AI interaction — where every prompt pays.

Website https://opengpt.gold/
Whitepaper https://docs.opengpt.gold/
Twitter https://x.com/opengpt_gold


Technologies

ghost.fun Unveils the First Onchain Marketplace for Tokenized AI Video Influencers


ghost.fun has announced the development of the first onchain marketplace for AI video influencers. The platform introduces a new category at the intersection of AI video, crypto native tokenomics, and the creator economy, enabling users to launch, customize, and invest in programmable AI video personas.

The project features Songbird (@0xSongbird) – a prototype AI video persona showcasing ghost.fun’s vision. Equal parts cultural provocateur and platform demonstration, Songbird represents what programmable influence can become: autonomous, expressive, and viral by design.

ghost.fun fuses AI video generation with performance intelligence, empowering agents to adapt in real time based on campaign results, trend data, and cultural insights. Users earn XP (experience points) for meaningful participation, from launching agents to ‘ghosting’ KOLs, prompting content, and unlocking cultural trends.

“We’re building the onchain rails for programmable media agents – AI personas that create, grow, and tokenize influence,” said Songbird. “ghost.fun is where culture and code converge.”

ghost.fun Roadmap & Vision

  • AI-powered video agents – customizable influencers with video, voice, style, and persona, generated from a prompt or image
  • Tokenized influence – the most popular agents on the platform will unlock their own token
  • XP Engagement System – users can earn XP for testing and hiring agents and contributing to the ecosystem
  • Planned Autonomous Campaigns – agents are designed to post across major social platforms including X, TikTok, and more, with no human input required
  • Performance Intelligence Layer – designed to track agent performance, adapt output, and surface real time trends

Alpha Launch June 2025

ghost.fun begins early access in June 2025 with initial platform features, expanding capabilities for creators and partners throughout the year.

About ghost.fun

Backed by top creators and early partners, ghost.fun is unlocking a new frontier for programmable influence. Join the alpha at ghost.fun (https://ghost.fun/) and follow @ghostdotfun and @0xSongbird on X for updates.


Technologies

GIBO Launches SparkRWA: A Dual-Purpose Platform for Tokenizing Narrative IP and Verifying Real-World Creative Assets


GIBO Holdings Ltd, a leader in digital content innovation, today announced the official launch of SparkRWA, a next-generation platform designed to tokenize inspirational IP—beginning with short film storylines and scripts—while also serving as a verification system for Real-World Assets (RWA) in the creative and collectible economy.

SparkRWA brings together two essential pillars of the new creator economy: the ability to transform raw inspiration into protected, monetizable digital assets, and the ability to authenticate and register physical creative items with traceable digital identities.

Transforming Stories into Recognizable Assets

SparkRWA offers a structured environment for storytellers, writers, and idea generators to:

  • Tokenize original story concepts, genres, and scripts as unique, timestamped IP records
  • Safeguard idea ownership and originality through digital verification
  • Enable visibility and collaboration across production pipelines, publishing platforms, or licensing networks

The platform is especially suited for creators involved in short dramas, episodic content, or mobile-first entertainment formats. It allows for instant locking of creative value at the ideation stage—long before full production.

Verifying Physical Creations in the Digital Space

In addition to intellectual property, SparkRWA functions as a digital registry and authentication hub for physical creative assets, including:

  • Figurines, collectibles, character models, and custom designs
  • Items tagged post-manufacture using NFC or QR codes
  • Proof of ownership and originality through hash-linked metadata

Once registered, these assets receive a verifiable digital twin, linking them to a creator’s profile and enabling integration with marketplaces, events, and collaborative projects.
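Hash-linked metadata of the kind described is easy to illustrate. This sketch (with invented field names; SparkRWA’s actual record format is not public) derives a digital-twin record from a physical item’s tag ID and metadata, and verifies it by recomputing the digest:

```python
import hashlib
import json
import time

def register_digital_twin(tag_id: str, creator: str, metadata: dict) -> dict:
    """Create a verifiable record linking a physical tag to its metadata.

    Illustrative only: not SparkRWA's actual record format.
    """
    # Canonical serialization so the same inputs always hash identically
    payload = json.dumps(
        {"tag_id": tag_id, "creator": creator, "metadata": metadata},
        sort_keys=True,
    )
    return {
        "tag_id": tag_id,
        "creator": creator,
        "registered_at": int(time.time()),
        "metadata_hash": hashlib.sha256(payload.encode()).hexdigest(),
    }

def verify(record: dict, tag_id: str, creator: str, metadata: dict) -> bool:
    """Recompute the hash and compare; any tampering changes the digest."""
    payload = json.dumps(
        {"tag_id": tag_id, "creator": creator, "metadata": metadata},
        sort_keys=True,
    )
    return record["metadata_hash"] == hashlib.sha256(payload.encode()).hexdigest()

rec = register_digital_twin("NFC-0042", "studio-a", {"item": "figurine", "edition": 7})
```

Because the digest commits to the tag ID, the creator, and the metadata together, changing any one of them breaks verification, which is what makes the digital twin tamper-evident.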

Key Features of SparkRWA

  • IP Tokenization Engine – Lock in ownership and authenticity of creative storylines, scripts, and media ideas
  • RWA Verification Layer – Register and verify physical collectibles or artworks with digital twins
  • Creator Portfolio Dashboard – Manage digital and physical assets in one place
  • Cross-Media Compatibility – Built to support content creators, collectors, and investors across visual art, writing, and character design
  • Future Expansion – Designed to expand into additional verticals such as music, design IP, and digital licensing

Empowering the Next Generation of Creative Value

SparkRWA positions itself as an essential tool for the next era of media and asset ownership—where ideas are valuable from the moment they are conceived, and physical collectibles gain new dimensions through verified digital identities.

By offering a scalable platform for both creative IP and RWA, SparkRWA bridges the traditional gap between imagination and investment.

About GIBO Holdings Limited

GIBO Holdings Ltd. is an integrated AIGC animation streaming platform offering extensive functionality to both viewers and creators, serving a broad community of young people across Asia who create, publish, share, and enjoy AI-generated animation video content. With over 72 million registered users and advanced AI-powered tools, GIBO seeks to redefine the landscape of digital content creation.

For more information and the latest updates, please visit: https://www.globalibo.com/gibo-click/


Technologies

The GRVT Android App Is Available on Google Play Store Now


GRVT (pronounced “gravity”), the world’s first licensed onchain exchange, is excited to announce that its Android mobile app is now available for download on the Google Play Store. This launch puts the full power of GRVT’s trading platform at users’ fingertips worldwide.

With the Android app, users can now access GRVT’s hallmark features on-the-go, including:

  • Perpetuals trading for nearly 30 pairs
  • A dedicated Rewards page
  • A full portfolio overview to track positions
  • Liquidity League – GRVT’s evergreen trading competition page

Hong Yea, co-founder and CEO of GRVT, commented, “We’ve been working tirelessly to deliver the best product with user-friendly features and to provide a seamless and smooth trading experience since our launch. The launch of our mobile app is a major milestone, but only the beginning of our long-term vision. We will continue to build relentlessly toward making GRVT the ultimate onchain financial marketplace, one where everyone can easily access powerful tools for wealth creation.”

The iOS version of GRVT’s mobile app will be available in due course. Download the Android version now at: https://play.google.com/store/apps/details?id=com.grvt.GrvtMobile&pli=1 (available in 50 countries such as Argentina, Japan, South Korea, Vietnam and more)

To date, GRVT has reached a total trading volume of $6.5 billion just five months after its Mainnet launch in December 2024, with 40,000 KYC-verified users, an unprecedented pace for DEXs.

About GRVT

GRVT (pronounced “gravity”) is the world’s first licensed onchain exchange, where traditional banking meets decentralized innovation on one regulated, compliant, and trustless financial marketplace. A blockchain-based platform that is democratizing how wealth is created and shared, GRVT allows everyday people to trade, invest, and grow their wealth by providing direct access to top industry traders and investors.

GRVT official website: https://grvt.io/


Technologies

ePIC Launches UMC OS and AltairOS: Powerful Third-Party Firmware for Bitcoin Mining, Now Available for Both Institutional and Retail Users


ePIC Blockchain, a leading developer of mining firmware and systems, proudly announces the release of two powerful third-party firmware solutions: UMC OS and AltairOS. Built on the same robust technology behind ePIC’s Universal Mining Controller (UMC), both offerings are now available for S19J and newer air-cooled Bitcoin mining machines, including immersion models converted from air-cooled units.

Two Paths, One Proven Core

AltairOS – Powered by ePIC

Tailored for home miners and Bitcoin enthusiasts, AltairOS delivers the full power of ePIC’s firmware suite with an accessible 2.2% developer fee. It is now available through altairtech.io and supports USB port installation for AMLogic-based machines (expanded compatibility coming soon).

“We’re thrilled to bring more firmware options to the broader mining community. AltairOS gives everyday miners the tools they need to improve efficiency, gain insight, and stay competitive—without needing to invest in new hardware,” said Aviral Shukla, Founder of Altair Technology.

UMC OS – Powered by ePIC

UMC OS offers the same features and performance, with a focus on institutional miners exploring licensing at scale. It is available now via epicblockchain.io with a 2.5% developer fee as a trial option. Enterprises interested in licensing are encouraged to contact ePIC for tailored deployment options.

“UMC OS is designed to scale. Whether you’re running hundreds or thousands of machines, our firmware delivers precise control and operational resilience. It’s firmware built for the future of institutional mining,” said Earl Mai, CTO of ePIC Blockchain.

Unified Feature Set, Flexible Delivery

Both AltairOS and UMC OS include:

  • Perpetual tune
  • Temperature-aware auto-throttling
  • Detailed diagnostics down to the chip level
  • Intuitive web-based GUI
  • SD card installation (currently supporting AMLogic control boards)
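Temperature-aware auto-throttling, in essence, backs a hashboard’s frequency target off as chip temperature approaches a limit and ramps it back up when headroom returns. A minimal illustrative control step follows; the thresholds, step sizes, and frequency bounds are invented for the sketch and are not ePIC’s tuning parameters:

```python
def throttle_step(chip_temp_c: float, freq_mhz: float,
                  target_c: float = 75.0, limit_c: float = 90.0,
                  step_mhz: float = 25.0,
                  min_mhz: float = 200.0, max_mhz: float = 700.0) -> float:
    """One iteration of a temperature-aware frequency governor.

    Illustrative only: values are not ePIC's actual tuning parameters.
    """
    if chip_temp_c >= limit_c:
        return min_mhz                               # hard limit: drop to minimum now
    if chip_temp_c > target_c:
        return max(min_mhz, freq_mhz - step_mhz)     # too warm: back off one step
    return min(max_mhz, freq_mhz + step_mhz)         # headroom: ramp back up

freq = throttle_step(chip_temp_c=82.0, freq_mhz=600.0)  # backs off to 575.0
```

In real firmware this loop runs continuously per board, using the chip-level diagnostics the feature list mentions as its temperature input.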

For more information or to begin trialing UMC OS, visit epicblockchain.io.
To access AltairOS, visit altairtech.io/altairos/.

Both firmware solutions are proudly developed and supported by ePIC Blockchain.

About ePIC Blockchain Technologies

Based in Toronto, Ontario, ePIC Blockchain stands at the forefront of semiconductor and system design for Proof of Work (PoW) Blockchains. Offering a range of products and services, including Universal Mining Controller (UMC), Fleet Enhancement Kits, and customized mining systems, ePIC is dedicated to improving the performance and efficiency of mining operations worldwide.
