Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.
This week's cryptohack roundup includes a U.S. federal judge striking down the SEC's expanded "Dealer Rule," a Python crypto library update stealing credentials, why digital payment apps are being excluded from some types of federal oversight, and drug cartels laundering profits via Tether.
A U.S. federal appeals court ruled that the U.S. Department of the Treasury exceeded its authority by sanctioning Tornado Cash, a cryptocurrency mixing service used by North Korean hackers to launder more than $455 million. Smart contracts "are not capable of being owned," the court ruled.
Researchers have devised a technique to train artificial intelligence models to impersonate people's behavior based on just two hours of interviews, creating a virtual replica that can mimic an individual's values and preferences.
Google researchers used an AI-powered fuzzing tool to identify 26 vulnerabilities in open-source code repositories, some of which had been lurking undiscovered for decades. Each was found with AI, using AI-generated and AI-enhanced fuzz targets, Google said.
Chinese artificial intelligence research company DeepSeek, funded by quantitative trading firms, introduced what it says is one of the first reasoning models to rival OpenAI's o1. Reasoning models fact-check their own outputs and perform multi-step reasoning tasks.
This week: sentences in the FTX, Bitfinex and Helix cases; a $25.5M Thala hack; the WazirX hack; and a South Korean probe of UpBit. U.S. lawmakers want a crackdown on Tornado Cash, and U.S. prosecutors may scale back crypto cases. BIT Mining was fined $10M, and the Chinese Communist Party expelled a key blockchain figure.
U.S. law enforcement arrested and indicted the founder of artificial intelligence edtech startup AllHere on fraud charges. Federal prosecutors accused 33-year-old Joanna Smith-Griffin of defrauding investors, charging her with securities fraud, wire fraud and aggravated identity theft.
Robots controlled by large language models can be jailbroken "alarmingly" easily, found researchers who manipulated machines into detonating bombs. "Jailbreaking attacks are applicable and arguably, significantly more effective on AI-powered robots," researchers said.
A U.S. federal judge sentenced crypto hacker Ilya "Dutch" Lichtenstein to five years in prison for his involvement in the $3.6 billion Bitfinex hack and subsequent money laundering. The 35-year-old and his wife Heather Morgan pleaded guilty last year to one count of conspiracy to commit money laundering.
This week: FTX sued to recover money, FTX's Caroline Ellison began her prison sentence, South Korea arrested hundreds in a $232M scam, a defendant pleaded guilty in a $73M pig-butchering case, BlueNoroff launched a new attack campaign, GodFather malware spread, and WonderFi's CEO was kidnapped and released after a ransom payment.
The tech industry is rushing out products, faster than you can say "hallucinations," to tamp down artificial intelligence models' propensity to lie. But many experts caution that generative AI is still not ready for scalable, high-precision enterprise use.
Palantir, Anthropic and AWS are developing an AI platform for U.S. defense, using Claude models to enhance decision-making, detect trends and speed document processing. The Biden administration has promoted the adoption of AI for national security.
This week: Metawin hacks, the LottieFiles attack, hackers used Ethereum smart contracts to target npm developers, Craig Wright faced contempt of court, Alameda sued KuCoin, Binance sought dismissal of a U.S. Securities and Exchange Commission lawsuit, and Immutable received a Wells Notice.
Meta revised its policy to permit U.S. defense contractors and national security agencies to use its AI model, Llama, previously restricted from military applications, announcing that it has partnered with firms including Lockheed Martin and Palantir.
Google's "highly experimental" artificial intelligence agent Big Sleep has autonomously discovered an exploitable memory flaw in popular open-source database engine SQLite. The researchers detail how the AI agent discovered the now-patched vulnerability.