Alex Zeltcer, CEO and co-founder at nSure.ai, believes more companies are using AI and gen AI to create synthetic data that can be used to identify fraud rings targeting online shoppers and gamers. He also observes machine-driven social engineering being conducted at scale to commit fraud.
In a solicitation for synthetic data generators, the U.S. federal government is looking for a machine that can generate fake data for real-world scenarios, such as identifying cybersecurity threats. Synthetic data can boost the accuracy of machine learning models or be used to test systems.
Machine learning systems are vulnerable to cyberattacks that could allow hackers to evade security and prompt data leaks, scientists at the National Institute of Standards and Technology warned. There is "no foolproof defense" against some of these attacks, researchers said.
There are many potential uses for generative AI at financial services firms, but few are more promising than those in the areas of risk and fraud, said Kristine Demareski, vice president of payments at Genpact, which is already harnessing AI to increase efficiencies in analysts' decision-making.
AI, machine learning and large language models are not new, but they are coming to fruition with the mass adoption of generative AI. For cybersecurity professionals, these are "exciting times we live in," said Dan Grosu, CTO and CISO at Information Security Media Group.
The National Institute of Standards and Technology is failing to provide adequate information about how it plans to award funding opportunities to research institutions and private organizations through a newly established Artificial Intelligence Safety Institute, according to a group of lawmakers.
Healthcare CISOs must recognize the real and imminent threat of AI-fueled cyberattacks and take proactive steps, including the deployment of AI-based security tools, to protect patient data and critical healthcare services, said Troy Hawes, managing director at consulting firm Moss Adams.
Marc Lueck, EMEA CISO at Zscaler, describes generative AI as the bridge between traditional AI and machine learning. He said it offers the ability to engage in humanlike conversations while tapping into vast data repositories and is both a powerful defense mechanism and a potential vulnerability.
Malware is being deployed throughout Europe at an alarming rate, and new threats are constantly on the rise. Join this webinar as we discuss and explore how the threat landscape has evolved in 2024 and what we expect to see in the year ahead.
In this OnDemand session, we'll also cover:
Threat trends and patterns...
Join us for an exclusive webinar with Unit 42's Security Consulting Team. As the digital landscape evolves, so do the tactics of threat actors. Unit 42 responds to hundreds of incidents involving these threat actors and has counseled clients on best practices for investigating, responding to, and remediating...
Banks and financial services firms continue to be targets of cyberattacks and need to protect access to sensitive data and critical business resources. They are struggling to manage digital identities and access rights across multiple systems and applications, leaving hidden outliers ripe for exploitation.
However,...
The U.S. National Institute of Standards and Technology is soliciting public guidance on implementation of an October White House executive order seeking safeguards for artificial intelligence. The order directed the agency to establish guidelines for developers of AI to conduct red-teaming tests.
The U.K.'s highest court on Wednesday affirmed that an artificial intelligence system cannot be granted ownership of patents. AI "is not a person, let alone a natural person and it did not devise any relevant invention," wrote Justice David Kitchin.
Automating decision-making in the security operations center strengthens an organization's ability to detect, respond to and mitigate security threats effectively. But the focus has shifted from micro-automation to a unified platform, according to Michael Lyborg, CISO of Swimlane.
OpenAI on Monday released a framework it says will help assess and protect against the "catastrophic risks" posed by the "increasingly powerful" AI models it develops. "We believe the scientific study of catastrophic risks from AI has fallen far short of where we need to be," the company said.