Forty-one percent of tech executives in a recent international survey said they believe advancements in AI will significantly increase security threats, according to NetApp's second annual Data Complexity Report, which points to 2025 as "AI's make or break year."
The School Library Systems Association of New York has created a free resource for PreK-12 educators on building student understanding of artificial intelligence.
"By 2027, AI assistants and AI-enhanced workflows incorporated into data integration tools will reduce manual intervention by 60 percent and enable self-service data management," according to Gartner.
In these days of rampant ransomware and other cybersecurity exploits, security is paramount for both proprietary and open source AI approaches, and here the open source movement may be susceptible to inherent drawbacks, such as the use of potentially insecure code from unknown sources.
Amazon Web Services (AWS) has unveiled Amazon Nova, a cutting-edge suite of foundation models (FMs) for generative AI.
A report from the Cloud Security Alliance highlights the need for AI audits that extend beyond regulatory compliance, and advocates for a risk-based, comprehensive methodology designed to foster trust in rapidly evolving intelligent systems.
Microsoft has introduced new and enhanced features for Microsoft 365 Copilot, including Copilot Actions, new AI "agents," and a Copilot Control System.
Common Sense Media, the nonprofit provider of entertainment and technology recommendations for families, and AI research and development company OpenAI have teamed up to create a free AI training course.
The annual virtual conference from the producers of Campus Technology and THE Journal will return on May 7, 2025, with a focus on AI, cybersecurity, and student success.
Stability AI, developer of open source generative models, has introduced Stable Diffusion 3.5, the latest version of its deep learning text-to-image model.