#Tech4Biz – Devstyler.io
News for developers from tech to lifestyle

Google to Pay $1.375 Billion to Settle Texas Privacy Lawsuits
https://devstyler.io/blog/2025/05/12/google-to-pay-1-375-billion-to-settle-texas-privacy-lawsuits/ | Mon, 12 May 2025

Settlement resolves claims over unauthorized tracking of user location, searches, and biometric data by the tech giant

Google has agreed to pay $1.375 billion to settle two major privacy lawsuits filed by the state of Texas, marking one of the largest legal recoveries from the tech giant in a state-level data privacy case.

Allegations of Unlawful Data Collection

The lawsuits, originally filed in 2022 by Texas Attorney General Ken Paxton, accused Google of illegally collecting and storing users’ personal data, including tracking users’ locations, recording private “incognito” mode searches, and capturing voice and facial recognition data, all without proper user consent.

“For years, Google secretly tracked people’s movements, private searches, and even their voiceprints and facial geometry through their products and services,”

Paxton said in a statement.

“In Texas, Big Tech is not above the law. I fought back and won.”

Paxton’s office called the resolution the largest recovery nationwide by any state attorney general for enforcement of privacy laws against Google.

No Admission of Wrongdoing

Google is settling the lawsuits without admitting any wrongdoing or liability and without making changes to its current products or services. In a statement, spokesperson José Castañeda said the cases involved outdated policies.

“This settles a raft of old claims, many of which have already been resolved elsewhere, concerning product policies we have long since changed,”

Castañeda said.

“We are pleased to put them behind us, and we will continue to build robust privacy controls into our services.”

Legal Context and Background

The litigation is part of a broader wave of privacy and antitrust scrutiny against Big Tech. The Texas case echoes a similar lawsuit settled last year by Meta—Facebook’s parent company—for its use of facial recognition technology, also led by Paxton’s office.

Google has previously pushed back against the Texas suits. In one instance, an appeals court found the company lacked sufficient ties to Texas to be sued there. Google also argued that its products had been mischaracterized, stating, for example, that Google Photos only scans users’ faces to group similar images and does not use that data for advertising.

The settlement also follows a string of high-profile antitrust rulings against Google. U.S. courts recently found that the company had illegally maintained monopolies in web search and advertising technology, with remedies under consideration—including a possible divestment of its Chrome browser business. Google has said it intends to appeal both decisions.

Image: Freepik

Trump Fires U.S. Copyright Chief Amid AI Copyright Clash Involving Musk
https://devstyler.io/blog/2025/05/12/trump-fires-u-s-copyright-chief-amid-ai-copyright-clash-involving-musk/ | Mon, 12 May 2025

The sudden dismissal of Shira Perlmutter raises concerns about political interference in copyright policy as tensions mount over AI’s use of protected content

President Donald Trump has dismissed Shira Perlmutter, the Register of Copyrights and head of the U.S. Copyright Office, in a move that is drawing sharp criticism from lawmakers and legal observers. The firing, first reported by CBS News and Politico, was effectively confirmed through a statement by Representative Joe Morelle, the top Democrat on the House Administration Committee.

A Sudden Dismissal with AI Implications

“This is a brazen, unprecedented power grab with no legal basis,”

Morelle said in response to the firing. He suggested that the decision came in direct retaliation for Perlmutter’s refusal to endorse Elon Musk’s attempts to use copyrighted material to train artificial intelligence models.

Perlmutter, who assumed the position in 2020 during Trump’s first term, was appointed by Carla Hayden, the Librarian of Congress, who was also reportedly dismissed by Trump earlier this week.

The news comes amid heightened tensions surrounding the use of copyrighted content to train generative AI systems—an issue at the heart of a growing number of legal and policy battles.

The Copyright Office’s Position on AI and Fair Use

The firing coincided with the pre-release of Part III of a comprehensive report from the U.S. Copyright Office examining the intersection of copyright and artificial intelligence. While the report does not explicitly reference Elon Musk or his companies, it offers detailed commentary on the legal boundaries of training AI systems using copyrighted works.

According to the report, although certain uses such as research and analysis may fall under the legal doctrine of fair use, “making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets” likely falls outside of fair use protections—especially if the content was accessed illegally.

The Office ultimately concludes that government regulation would be “premature” at this stage but supports the continued development of licensing markets. It also recommends considering “alternative approaches such as extended collective licensing” to address potential market failures.

Musk’s Role and Broader Tech Tensions

The situation is further complicated by the involvement of Elon Musk, a longtime Trump ally, co-founder of OpenAI, and founder of rival startup xAI, which is being integrated into X (formerly Twitter). Musk has voiced support for Jack Dorsey’s provocative call to “delete all IP law,” a sentiment that has stirred controversy in tech and legal circles alike.

While Musk has not publicly commented on Perlmutter’s firing, Trump acknowledged the story by “ReTruthing” a post from conservative attorney Mike Davis, who ironically appeared to disapprove of the move:

“Now tech bros are going to attempt to steal creators’ copyrights for AI profits.”

Meanwhile, OpenAI and other AI companies are currently facing multiple lawsuits alleging copyright infringement. The company has publicly advocated for legislative clarity that would allow AI developers to operate under expanded fair use protections—a move likely to face resistance from artists, authors, and lawmakers concerned about creative rights.

Looking Ahead

As generative AI continues to reshape the media, tech, and legal landscapes, the abrupt removal of the nation’s top copyright official underscores the growing political and economic stakes. With Perlmutter’s departure and the Copyright Office’s critical report now in circulation, the question of how U.S. law should adapt to the AI era is more urgent than ever.

Image Credit: Gage Skidmore from Surprise, AZ, United States of America

Meet Vulcan: Amazon’s First Robot with a Sense of Touch
https://devstyler.io/blog/2025/05/09/meet-vulcan-amazon-s-first-robot-with-a-sense-of-touch/ | Fri, 09 May 2025

Blending physical AI with advanced robotics, Vulcan brings human-like dexterity to Amazon’s warehouses—enhancing safety, precision, and collaboration in fulfillment operations

At its Delivering the Future event in Dortmund, Germany, Amazon announced the debut of Vulcan, a groundbreaking robot that introduces a fundamental new capability to warehouse automation: the sense of touch.

Unlike previous robotic systems that rely primarily on vision and pre-programmed motions, Vulcan is engineered to physically “feel” the objects it handles—offering a major leap forward in dexterity, safety, and efficiency across Amazon’s vast network of fulfillment centers.

“Vulcan represents a fundamental leap forward in robotics,”

said Aaron Parness, Amazon’s Director of Applied Science.

“It’s not just seeing the world, it’s feeling it—enabling capabilities that were impossible for Amazon robots until now.”

The announcement was made via Amazon’s official blog post and showcased how Vulcan is already transforming warehouse operations in locations such as Spokane, Washington, and Hamburg, Germany.


From “Numb and Dumb” to Tactile Intelligence

While robots have become adept at tasks ranging from autonomous driving to cleaning pet hair, most commercial units lack a sense of touch—rendering them fragile or clumsy in environments that require nuance.

“In the past, when industrial robots have unexpected contact, they either emergency stop or smash through that contact,”

said Parness.

“They often don’t even know they have hit something.”

This is the problem Vulcan was built to solve.

Equipped with advanced force-feedback sensors and an “end of arm tooling” system that mimics human-like grip adjustments, Vulcan can manipulate items with care—gently repositioning objects inside densely packed bins without causing damage.
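
Amazon has not published Vulcan’s control software, but the general idea of force-feedback grasping can be sketched in a few lines: close the gripper until the sensed contact force enters a gentle target band, and back off if it overshoots. Everything below (the simulated sensor, the gripper stub, and the thresholds) is a hypothetical illustration, not Amazon code.

```python
import time

# --- Simulated hardware stubs (stand-ins for a real force sensor / gripper driver) ---
_item_width_mm = 60.0          # pretend the item between the paddles is 60 mm wide
_current_width_mm = 80.0

def set_gripper_width_mm(width_mm: float) -> None:
    global _current_width_mm
    _current_width_mm = width_mm

def read_force_newtons() -> float:
    # Simple spring model: force rises as the gripper squeezes past the item width.
    compression = max(0.0, _item_width_mm - _current_width_mm)
    return 2.0 * compression   # 2 N per mm of compression (arbitrary)

TARGET_FORCE_N = 5.0   # gentle grip target (assumed value)
MAX_FORCE_N = 8.0      # back off above this to avoid damaging the item
STEP_MM = 0.5          # gripper adjustment per control tick

def grasp_with_force_feedback(start_width_mm: float = 80.0) -> float:
    """Close the gripper until contact force reaches a gentle target,
    easing off if it overshoots. Returns the final gripper width."""
    width = start_width_mm
    while True:
        force = read_force_newtons()
        if force > MAX_FORCE_N:
            width += STEP_MM           # squeezing too hard: open slightly
        elif force < TARGET_FORCE_N:
            width -= STEP_MM           # not enough contact yet: keep closing
        else:
            return width               # force is inside the gentle-grip band
        set_gripper_width_mm(width)
        time.sleep(0.001)              # control-loop tick (illustrative)

if __name__ == "__main__":
    print(f"Settled at gripper width {grasp_with_force_feedback():.1f} mm")
```

The point of the sketch is the feedback loop itself: the robot adjusts its grip based on what it feels rather than executing a fixed, pre-programmed squeeze.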

Its design resembles a ruler attached to a hair straightener, where:

  • The ruler component makes space in crowded bins,
  • The paddles grip and insert items using built-in conveyor belts that help “zhoop” them into place.

Amazon says Vulcan’s ability to pick and stow items makes associates’ jobs easier and its operations more efficient.


Built for the Bin: Solving Amazon’s Unique Storage Challenge

In Amazon’s warehouses, items are stored in fabric-covered pods split into one-foot square compartments—each containing up to ten items. The irregularity and density of these compartments have long posed a challenge to robotic systems.

While earlier robots like Sparrow, Cardinal, and Robin relied on computer vision and suction cups to handle packages, they lacked the tactile intelligence to finesse objects in tight spaces. Vulcan, however, changes the game.

Using a suction-based picking arm guided by a camera and stereo vision system, Vulcan identifies items and the best gripping points while avoiding accidentally pulling out surrounding objects—an error engineers refer to as “co-extracting non-target items.”

Vulcan can pick and stow about 75% of the diverse inventory found in fulfillment centers, performing at speeds that rival Amazon’s human workers. And when it encounters an object it can’t confidently handle, it’s smart enough to call in a human colleague—striking a balance between AI autonomy and human judgment.

Vulcan uses an arm that carries a camera and a suction cup to pick items from Amazon’s storage pods.


Enhancing Safety and Ergonomics for Employees

One of Vulcan’s key contributions is in improving worker safety and reducing ergonomic strain.

Traditionally, reaching items stored in the top or bottom rows of pods—some as high as eight feet—required workers to use ladders or stoop to floor level. Vulcan now handles these less ergonomic zones, allowing employees to work comfortably at waist height.

“Working alongside Vulcan, we can pick and stow with greater ease,”

said Kari Freitas Hardy, a front-line employee at Amazon’s Spokane facility.

“It’s great to see how many of my co-workers have gained new job skills and taken on more technical roles.”

The company has already deployed over 750,000 robots across its operations, including systems like Proteus, Titan, and Hercules, all built to handle physically demanding tasks. Vulcan is the latest and most advanced addition in this line—focusing on precision and adaptability rather than brute strength.

According to Amazon, Vulcan will let associates spend less time on step ladders and more time working in their power zone.


A Decade of Robotics Innovation

Amazon’s approach to robotics has never been about building flashy tech for its own sake. Instead, the company zeroes in on specific operational problems and builds purpose-driven solutions.

“We pick out important problems and find or develop solutions—we don’t create interesting tech and then look for ways to use it,”

Parness emphasized.

Vulcan’s development began with a simple observation: each time a worker uses a ladder to access a high shelf, efficiency drops and injury risk increases. Tackling this required breakthroughs in physical AI, including:

  • Real-world training based on tactile feedback rather than simulation,
  • Algorithms to identify item types and bin availability,
  • Adaptive grip mechanics to handle everything from tubes of toothpaste to delicate electronics.

Vulcan was trained on thousands of real-world scenarios, and like a child learning through experience, it improves its understanding of object properties through trial and error.

“This is a technology that three years ago seemed impossible,”

Parness said,

“but is now set to help transform our operations.”



Empowering the Workforce of the Future

The ripple effects of Vulcan extend beyond automation. As robots take on more of the physical burden, Amazon is investing in reskilling its workforce through programs like Career Choice, helping employees transition into roles like robotics maintenance and automation systems engineering.

With robots now assisting in 75% of customer orders, Amazon’s strategy appears to be less about replacing humans and more about augmenting them—with Vulcan standing as the latest proof that automation and human labor can coexist and even elevate each other.

Images/Photos: Amazon

U.S. Court Ruling Forces Apple to Open App Store to Alternative Payments, Disrupts Mobile Commerce Landscape
https://devstyler.io/blog/2025/05/07/u-s-court-ruling-forces-apple-to-open-app-store-to-alternative-payments-disrupts-mobile-commerce-landscape/ | Wed, 07 May 2025

Judgment in Epic Games case dismantles App Store exclusivity, setting the stage for fintech innovation and developer freedom

In a seismic shift for the mobile app economy, a federal judge has ruled that Apple violated a longstanding injunction by continuing to block app developers from directing users to alternative payment systems—potentially costing the tech giant billions and opening the door for new market entrants.

The decision stems from the high-profile legal battle Epic Games, Inc. v. Apple Inc. (Case No. 21-16506), first launched in 2020 when Epic challenged Apple’s control over in-app transactions. U.S. District Judge Yvonne Gonzalez Rogers, who presided over the original case, found Apple in contempt of the 2021 injunction she previously issued. 

The order prohibited Apple from preventing developers from steering users to non-App Store payment methods.

Apple’s Defiance and Legal Fallout

Despite the clear terms of the original injunction, Apple introduced a new 27% fee on purchases made outside the App Store—a move the court interpreted as a tactic to maintain its dominant revenue model.

In her ruling, Judge Gonzalez Rogers stated:

“Apple, despite knowing its obligations thereunder, thwarted the Injunction’s goals, and continued its anticompetitive conduct solely to maintain its revenue stream.”

The court also referred Apple’s Vice President of Finance, Alex Roman, to the U.S. Attorney’s Office for potentially lying under oath about the fee’s timing—an unprecedented escalation that signals the court’s frustration with Apple’s noncompliance.

Apple has filed for an emergency stay with the Ninth Circuit Court of Appeals, hoping to pause the enforcement of the ruling during the appeals process.

Consequences for Apple—and the Entire Tech Landscape

This decision does more than punish Apple; it redefines how app commerce can function on iOS. The ruling confirms that developers must be allowed to include in-app links or buttons directing users to alternative payment platforms.

Here’s what it could mean going forward:

1. Rise of Third-Party Payment Solutions

The ruling creates fertile ground for a wave of fintech innovation. Startups and established players like Stripe, PayPal, and Adyen are likely to roll out mobile-native payment tools that integrate seamlessly into apps. These solutions could offer lower fees, better user experience, and more flexible payment models, threatening Apple’s dominance.

2. Reduced Costs and New Business Models for Developers

Smaller developers and content creators—long squeezed by Apple’s 15–30% commission—now have a chance to take control of their payment flow. Subscription-based apps, games, and streaming services may lower prices or introduce new service bundles via direct web links.

3. Global Ripple Effects

This case complements regulatory moves abroad, including the EU’s Digital Markets Act and South Korea’s App Store reform. Together, these efforts are reshaping global norms around platform governance and competition.

4. Apple’s Next Move: Compliance or Control?

While Apple must comply with the order, industry analysts expect the company to introduce new terms that retain some leverage—such as additional security vetting, administrative fees, or user interface restrictions for apps using third-party payments.

“This ruling may mark the beginning of a parallel app commerce economy—one where Apple no longer owns the toll booth,”

said Antonia Ramirez, a legal scholar focused on digital markets.

“It’s not just a legal victory for developers—it’s a green light for innovation.”

Looking Ahead

Apple’s appeal may yet delay the practical implementation of the ruling, but the legal foundation has been laid. The mobile app ecosystem—long shaped by Apple’s strict controls—now faces a reckoning. Developers, users, and competitors alike are watching closely as the rules of digital commerce are rewritten in real time.

Image: Freepik

IBM CEO Declares: ‘The Era of AI Experimentation Is Over’ at THINK 2025
https://devstyler.io/blog/2025/05/06/ibm-ceo-declares-the-era-of-ai-experimentation-is-over-at-think-2025/ | Tue, 06 May 2025

New AI and hybrid cloud innovations from IBM aim to accelerate enterprise adoption, streamline integration, and unlock the full value of unstructured data.

IBM has introduced a suite of new technologies aimed at scaling enterprise AI across hybrid environments, as announced today at its annual THINK conference. This move reinforces IBM’s commitment to breaking down barriers to AI deployment with integrated solutions that unify data, orchestrate AI agents, and simplify enterprise operations.

With over one billion applications projected to emerge by 2028, businesses are under increasing pressure to streamline operations across fragmented systems. IBM is responding with hybrid technologies, enhanced agent capabilities, and the expertise of IBM Consulting, aiming to help clients accelerate AI adoption and realize measurable business value.


“The era of AI experimentation is over,”

said Arvind Krishna, Chairman and CEO of IBM.

“Today’s competitive advantage comes from purpose-built AI integration that drives measurable business outcomes.”

Enterprise AI Agents Powered by watsonx Orchestrate

Central to IBM’s announcement is the expanded functionality of watsonx Orchestrate, designed to help businesses create, deploy, and manage AI agents across a wide array of enterprise tools.

Key features include:

  • Agent Builder: Enables users to create custom AI agents in under five minutes using no-code to pro-code tools.
  • Pre-built Domain Agents: Ready-to-use agents tailored for HR, sales, procurement, and more.
  • Wide Integration: Supports over 80 leading enterprise applications, including solutions from Adobe, AWS, Microsoft, Oracle, Salesforce Agentforce, SAP, ServiceNow, and Workday.
  • Agent Orchestration: Coordinates multi-agent workflows and tools across vendors.
  • Agent Observability: Provides monitoring, guardrails, and lifecycle governance.

IBM also unveiled the Agent Catalog, offering access to over 150 agents and tools co-developed with partners like Box, Mastercard, Symplistic.ai, 11x, and others. Sample integrations include a Salesforce-native prospecting agent and a conversational HR agent for Slack.

Tackling Integration Challenges with webMethods Hybrid Integration

A common obstacle to enterprise AI adoption is integration complexity. IBM is addressing this with the launch of webMethods Hybrid Integration, a solution for automating workflows across applications, APIs, partners, and clouds.

According to a Forrester Total Economic Impact (TEI) study, organizations using webMethods realized:

  • 176% ROI over three years
  • 40% reduction in downtime
  • Up to 67% time savings on project execution
  • Improved ease of use, security, and visibility

This integration technology complements IBM’s existing automation offerings and is enhanced by collaborations with HashiCorp, including integrations with Terraform and Vault, to support secure, scalable hybrid cloud operations.

Unlocking the Value of Unstructured Data

Unstructured data—such as contracts, spreadsheets, and presentations—represents a largely untapped asset in enterprise AI. IBM is advancing watsonx.data to help organizations activate this data with greater accuracy.

Highlights include:

  • Open Data Lakehouse: Now with data fabric features like lineage tracking and governance.
  • watsonx.data integration: A unified tool for orchestrating data pipelines across formats.
  • watsonx.data intelligence: AI-driven tools for deep insights from unstructured data.

Early testing indicates watsonx.data enables up to 40% more accurate AI compared to traditional retrieval-augmented generation (RAG) methods.

In support of this strategy, IBM recently announced plans to acquire DataStax, a leader in unstructured data handling for generative AI. Additionally, watsonx is now integrated with Meta’s Llama Stack, further enhancing its generative AI capabilities.

IBM’s Content-Aware Storage (CAS) is also now available on IBM Fusion, with support for IBM Storage Scale arriving in Q3. CAS enables contextual processing of unstructured data to speed up AI inferencing.

Infrastructure for AI at Scale: Introducing IBM LinuxONE 5

To support high-performance AI workloads, IBM introduced IBM LinuxONE 5, its most secure and powerful Linux platform to date. It can process up to 450 billion AI inference operations per day.

Innovations include:

  • Telum II AI Processor & IBM Spyre Accelerator: High-speed processing for generative AI, with Spyre available in late 2025.
  • Confidential Containers & Quantum-Safe Encryption: Advanced security for sensitive workloads.
  • Cost and Energy Efficiency: Compared to x86 systems, LinuxONE 5 can lower total cost of ownership by up to 44% over five years.

IBM also expanded partnerships with AMD, Intel, CoreWeave, and NVIDIA to deliver new compute and storage solutions for AI-driven applications.

The Road Ahead

IBM’s latest announcements represent a strategic push toward operationalizing AI at enterprise scale through modular, secure, and hybrid-ready technologies. As businesses move from AI experimentation to enterprise-wide adoption, IBM aims to be the infrastructure backbone enabling that transformation.

Image: IBM video

Zencoder Acquires Machinet to Expand AI Coding Assistant Ecosystem
https://devstyler.io/blog/2025/04/25/zen-coder-acquires-machinet-to-expand-ai-coding-assistant-ecosystem/ | Fri, 25 Apr 2025

Acquisition strengthens Zencoder’s position in the AI coding market and brings enhanced JetBrains IDE support to Machinet users.

Zencoder, a provider of AI agents integrated directly into developers’ environments, announced today that it has acquired Machinet, a developer of context-aware AI coding assistants with over 100,000 downloads in JetBrains IDEs. The strategic acquisition further cements Zencoder’s position in the rapidly expanding AI coding assistant market, while broadening its multi-integration ecosystem across popular development platforms.

Expanding the Developer Experience

Following the acquisition, Machinet users will gain access to a significantly enhanced developer experience, including:

  • Enhanced JetBrains Integration

By combining Machinet’s specialized expertise in JetBrains IDEs with Zencoder’s existing support, developers can look forward to even more powerful tools tailored for these widely-used environments.

  • Augmented Unit Testing

Machinet’s context-aware unit test generation technology will be integrated with Zencoder’s advanced testing agents, offering developers a more comprehensive and automated testing experience.

  • Industry-Leading Customization

Machinet’s developer community will now benefit from Zencoder’s deep capabilities in understanding large codebases, adapting to team-specific coding styles, and aligning with organizational architecture patterns.

“This acquisition aligns perfectly with our mission to turn everyone into a 10x engineer by providing AI solutions that handle routine coding tasks and let developers focus on innovation,”

said Andrew Filev, CEO and Founder of Zencoder.

“By bringing our advanced coding agent to Machinet’s thriving JetBrains community, we’re fulfilling our mission to deliver the best AI coding experience regardless of development environment.”

Streamlined Transition for Customers

As part of the acquisition, Machinet’s domain and marketplace presence will be transferred to Zencoder. Current Machinet customers will receive detailed guidance on transitioning to Zencoder’s platform, which leverages its proprietary Repo Grokking technology and AI agents.

Existing Machinet users will now gain access to Zencoder’s full feature set, including:

  • Advanced multi-file editing and refactoring capabilities
  • Deep codebase understanding across repositories using Repo Grokking™
  • Sophisticated self-repair mechanisms that automatically test and refine outputs
  • Expanded integration with over 20 developer tools, including Jira, GitHub, and GitLab
  • Access to Zencoder’s specialized coding and unit testing AI agents

Industry-Leading Performance

Earlier this year, Zencoder’s AI platform demonstrated benchmark-breaking performance:

  • A 2x improvement over previous best results on SWE-Bench-Multimodal
  • State-of-the-art results on the challenging “IC SWE (Diamond)” section of SWE-Lancer, outperforming top published results by 23%

With the integration of Machinet’s technologies, Zencoder aims to further enhance its capabilities, reinforcing its leadership position in AI-assisted software development.

Availability and Next Steps

Zencoder’s full suite of AI coding and testing tools, including the newly enhanced JetBrains integration, is now available through zencoder.ai, with subscription options ranging from free basic plans to comprehensive enterprise solutions.

Current Machinet users will be provided with detailed transition instructions in the coming weeks.

About Zencoder

Based in Silicon Valley, Zencoder offers powerful AI coding and testing agents designed to empower professional developers. Founded by serial entrepreneur Andrew Filev, Zencoder’s globally distributed team of over 50 engineers helps organizations accelerate innovation and ship impactful software faster. The company holds ISO 27001 certification, is SOC 2 Type II compliant, and is in the process of finalizing its ISO 42001 certification.

Snyk Unveils AI-Powered DAST Platform to Secure the Future of Software Development
https://devstyler.io/blog/2025/04/24/snyk-unveils-ai-powered-dast-platform-to-secure-the-future-of-software-development/ | Thu, 24 Apr 2025

How Snyk’s AI-Powered DAST Tool Is Redefining Application Security for the Next Generation of Software Development

Snyk has introduced a groundbreaking solution to tackle the next wave of cybersecurity challenges. The company announced the launch of Snyk API & Web, a next-generation Dynamic Application Security Testing (DAST) tool engineered to secure modern, AI-powered applications.

Bridging the Security Gap in AI-Driven Development

Traditional DAST solutions have long struggled to keep up with the rapidly evolving architectures and increasing complexity of APIs brought by AI-centric development. Recognizing this critical need, Snyk acquired Probely in 2024, integrating its advanced DAST capabilities into Snyk’s broader security ecosystem—complementing existing offerings like SAST, SCA, container, and Infrastructure as Code security.

The result is a unified platform that delivers a holistic, end-to-end view of application security throughout the entire Software Development Life Cycle (SDLC).

AI-Powered Innovations for Modern Applications

Snyk API & Web introduces several forward-thinking features designed specifically for the AI era:

  • AI-Driven API Testing: Leveraging in-house fine-tuned Large Language Models (LLMs), the platform automates API discovery and vulnerability scanning—detecting complex issues like Broken Object Level Authorization (BOLA) more efficiently (illustrated in the sketch after this list).
  • Code-Informed Dynamic Testing: By correlating DAST results with SAST findings, Snyk provides deeper context for vulnerabilities, supporting more accurate prioritization and enabling AI-powered auto-remediation via DeepCode AI Fix.
  • CI/CD-Ready: Designed with DevOps in mind, the tool offers seamless CI/CD integration, allowing developers to perform self-service scans within pipelines, guided by organizational AppSec policies.
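
For readers unfamiliar with BOLA, the minimal sketch below shows the class of flaw such a scanner hunts for: an API endpoint that trusts a client-supplied object ID without checking who owns the object. The FastAPI routes and in-memory data are hypothetical illustrations for explanation only, not Snyk’s detection logic or output.

```python
from fastapi import FastAPI, Header, HTTPException

app = FastAPI()

# Toy data store: invoice ID -> owning user (illustrative only).
INVOICES = {
    "inv-001": {"owner": "alice", "amount": 120.0},
    "inv-002": {"owner": "bob", "amount": 75.5},
}

@app.get("/invoices/{invoice_id}")
def get_invoice_vulnerable(invoice_id: str):
    # BOLA: any caller can fetch ANY invoice just by guessing IDs,
    # because the handler never checks object ownership.
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        raise HTTPException(status_code=404)
    return invoice

@app.get("/v2/invoices/{invoice_id}")
def get_invoice_fixed(invoice_id: str, x_user: str = Header(...)):
    # Fixed version: authorization is enforced per object, not just per request.
    # (x_user stands in for a real authenticated identity, e.g. from a JWT.)
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != x_user:
        raise HTTPException(status_code=404)  # don't reveal that the object exists
    return invoice
```

A dynamic tester probes endpoints like the first one with IDs belonging to other users and flags the ones that return data they should not.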

Strategic Vision from the Top

“Our vision is to empower developers and AppSec teams to secure their entire application surface without slowing down innovation,”

said Geva Solomonovich, Snyk CIO.

“Snyk API & Web is the culmination of that vision—bringing together the speed of AI with the depth of full-lifecycle security.”

Momentum and Market Response

Since integrating Probely, Snyk has reported a 245% quarter-over-quarter growth in Annual Recurring Revenue (ARR) for DAST services—a clear signal of strong market demand for modern, AI-augmented security tools.

Looking forward, the company plans to deepen its AI capabilities, broaden API coverage, and provide even richer context for faster, more accurate vulnerability remediation.

TechCrunch Revealed How Jack Dorsey Laid Off 931 of Block’s Staff — Read the Email He Sent
https://devstyler.io/blog/2025/03/26/techcrunch-revealed-how-jack-dorsey-laid-off-931-of-block-s-staff-read-the-email-he-sent/ | Wed, 26 Mar 2025

The Block CEO announced the layoffs in an internal email outlining strategic shifts, performance reviews, and management restructuring

Financial technology giant Block, Inc., the parent company of Cash App and Square, has laid off 931 employees—approximately 8% of its global workforce—according to a leaked internal message obtained and published by TechCrunch. The announcement was made in an email sent to employees by co-founder and CEO Jack Dorsey on Tuesday.

In the email, Dorsey outlined that the layoffs are part of broader organizational changes and not tied to financial issues or AI-driven automation. Rather, the move is aimed at streamlining the company’s structure, accelerating performance standards, and aligning with new strategic priorities.

Breakdown of Layoffs

According to Dorsey’s message, the job cuts fall into three categories:

  • Strategy: 391 employees are being let go as part of strategic shifts within the company.
  • Performance: The largest group—460 employees—are being cut due to underperformance or a trend toward underperformance based on internal metrics.
  • Management Restructuring: 80 managerial roles have been eliminated to flatten the company’s hierarchy to what Dorsey calls “innercore+4,” meaning his direct reports and four levels below them. Additionally, 193 managers are being reassigned to individual contributor roles.

The company is also closing 748 open positions, with exceptions made for roles that are in the final stages of the hiring process or deemed critical to operations and leadership.

Not the First Cut

This is not the first significant round of layoffs at Block in recent months. In January 2024, the company reduced its workforce by about 1,000 employees. As of its most recent filing in December 2024, Block reported having roughly 11,300 employees globally.

Despite the magnitude of the layoffs, Dorsey emphasized that they are part of an effort to make the organization more agile and performance-focused.

“We are raising the bar and acting faster on performance,”

he wrote in the internal message published by TechCrunch.

The full text of Jack Dorsey’s leaked email can be read in TechCrunch’s original publication.

Block has not yet issued a public comment on the matter, TechCrunch reported.

Image by Mark Warner is licensed under CC BY-SA 2.0

Source: TechCrunch

Google Unveils Gemini 2.5: Its Most Intelligent AI Model Yet
https://devstyler.io/blog/2025/03/25/google-unveils-gemini-2-5-its-most-intelligent-ai-model-yet/ | Tue, 25 Mar 2025

Topping global benchmarks and redefining reasoning in AI, Gemini 2.5 Pro brings unprecedented accuracy, coding power, and context-aware performance

Google has introduced Gemini 2.5, its most advanced AI model to date, marking a major leap in artificial intelligence with enhanced reasoning and performance across a wide array of complex tasks. Released today as an experimental version of Gemini 2.5 Pro, the model is already topping industry benchmarks and is now available in Google AI Studio and the Gemini app for Advanced users.

Gemini 2.5 Pro Experimental tops the LMArena leaderboard. Image: Google

Described as a “thinking model,” Gemini 2.5 is engineered to reason through problems before generating a response — a capability that significantly improves accuracy, contextual understanding, and decision-making. Unlike earlier models focused mainly on pattern recognition and prediction, Gemini 2.5 emphasizes logical analysis, contextual nuance, and informed decision-making.

“We’re building these thinking capabilities directly into all of our models going forward,”

Google stated in the launch announcement.

“This enables our AI to solve more complex problems and support more context-aware, capable agents.”

The new model debuts at #1 on the LMArena leaderboard, a benchmark driven by human preferences, highlighting its superior reasoning, coding, and stylistic coherence. It also performs exceptionally well on advanced math and science benchmarks, including GPQA and AIME 2025, without relying on expensive test-time methods like majority voting.

Notably, Gemini 2.5 Pro achieved a state-of-the-art 18.8% score on Humanity’s Last Exam, a rigorous dataset built by hundreds of subject matter experts to test human-level reasoning across disciplines.


In coding, Gemini 2.5 Pro represents a significant jump from its predecessor, Gemini 2.0. It excels in generating functional, visually appealing web apps, agentic code transformations, and code editing. It also leads on SWE-Bench Verified, the industry standard for evaluating agentic coding abilities, with a score of 63.8% using a custom agent configuration.

As a showcase of its power, Google shared an example where Gemini 2.5 Pro generates a fully executable video game from a single-line prompt — demonstrating its potential for developers, educators, and creators alike.
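
Developers can try that kind of single-prompt experiment themselves once they have an API key from Google AI Studio. The minimal sketch below uses the google-genai Python SDK; the experimental model identifier shown ("gemini-2.5-pro-exp-03-25") is an assumption based on Google’s naming pattern and may differ from the ID exposed in your account.

```python
import os

from google import genai  # pip install google-genai

# Assumes an API key from Google AI Studio is set in GEMINI_API_KEY.
client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.5-pro-exp-03-25",  # assumed experimental model ID; check AI Studio
    contents="Make me a captivating endless runner game as a single HTML file.",
)

# The model's reply (code plus explanation) is returned as plain text.
print(response.text)
```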

Pricing for Gemini 2.5 Pro will be announced in the coming weeks, with options for higher rate limits and scaled production use. Gemini 2.5 Pro is available now in Google AI Studio and in the Gemini app for Gemini Advanced users, and it will be coming to Vertex AI soon, further expanding its reach across Google’s ecosystem.

OpenAI Launches Advanced AI-Powered Speech-to-Text and Text-to-Speech Models
https://devstyler.io/blog/2025/03/21/openai-launches-advanced-ai-powered-speech-to-text-and-text-to-speech-models/ | Thu, 20 Mar 2025

New AI models enhance transcription accuracy and enable expressive, customizable voice interactions.

In a significant leap forward for artificial intelligence-driven voice technology, OpenAI has unveiled its latest speech-to-text and text-to-speech audio models. This release marks a major milestone in developing more intuitive, customizable, and accurate AI voice agents.

Revolutionizing AI-Driven Voice Agents

Over the past few months, the company has been dedicated to advancing the intelligence, capabilities, and practical applications of text-based AI agents. Its previous innovations, including Operator, Deep Research, the Computer-Using Agent, and the Responses API, have laid the groundwork for more sophisticated AI interactions. However, true usability demands deeper and more natural engagement than text-based conversations alone can offer.

With the launch of these cutting-edge audio models, developers now have access to powerful tools that enhance AI’s ability to understand and generate human speech with remarkable accuracy and expression. These models provide a new benchmark in speech technology, significantly improving performance in challenging scenarios such as diverse accents, noisy environments, and varying speech speeds.

Breakthroughs in Speech-to-Text Accuracy

The newly introduced gpt-4o-transcribe and gpt-4o-mini-transcribe models exhibit remarkable advancements in word error rate reduction and language recognition. Outperforming previous Whisper models, these models utilize reinforcement learning and extensive training on high-quality audio datasets, leading to:

  • Enhanced transcription reliability
  • Improved recognition of nuanced speech patterns
  • Reduction in misinterpretations across different speech conditions

These advancements make them particularly well-suited for applications such as customer service call centers, meeting transcription services, and accessibility tools for users with hearing impairments.
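
To give a concrete sense of how such a model is consumed, here is a minimal sketch using the official OpenAI Python SDK’s transcription endpoint; the audio file name is a placeholder, and access to the gpt-4o-transcribe models depends on your API account.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "meeting.wav" is a placeholder; any supported audio format works.
with open("meeting.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="gpt-4o-transcribe",   # or "gpt-4o-mini-transcribe" for lower cost
        file=audio_file,
    )

print(transcript.text)
```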

Next-Level Customization with Text-to-Speech

For the first time, developers can instruct AI voice models not just on what to say, but also on how to say it. The gpt-4o-mini-tts model introduces a new level of steerability, allowing users to dictate tone and style—for example, requesting speech in the manner of a “sympathetic customer service agent.” This unlocks potential applications for:

  • Dynamic customer support interactions
  • Expressive narration for audiobooks and storytelling
  • More human-like AI companions for various digital interfaces

While the current models are limited to artificial, preset voices, OpenAI says it monitors them on an ongoing basis to ensure they align with its synthetic voice standards.
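
A minimal sketch of that steerability with the OpenAI Python SDK’s speech endpoint is shown below. The streaming pattern is the SDK’s documented approach for audio output, but treat the specific voice name and the instructions parameter as assumptions that may vary with SDK and model versions.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stream the synthesized speech straight to an MP3 file.
with client.audio.speech.with_streaming_response.create(
    model="gpt-4o-mini-tts",
    voice="coral",  # one of the preset voices; availability may vary
    input="Thanks for your patience. I've refunded the duplicate charge.",
    # Steerability: describe HOW it should be said, not just what to say.
    instructions="Speak like a sympathetic customer service agent, calm and warm.",
) as response:
    response.stream_to_file("support_reply.mp3")
```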

Innovations Driving the New Audio Models

The company’s latest AI speech models are built upon extensive research and cutting-edge methodologies, including:

  • Pretraining with authentic audio datasets: Using specialized datasets tailored to speech applications, the models capture and interpret speech nuances with exceptional precision.
  • Advanced distillation techniques: The use of self-play methodologies ensures realistic conversational dynamics, enhancing user-agent interactions.
  • Reinforcement learning enhancements: By leveraging RL-heavy paradigms, the speech-to-text models achieve unprecedented levels of accuracy, reducing hallucinations and misrecognitions.

API Availability

These new audio models are now available via OpenAI’s API, empowering developers to build more responsive and interactive AI-driven voice applications. Additionally, an Agents SDK integration simplifies the development process for those looking to incorporate AI voice interactions seamlessly.

Future Developments

Looking ahead, OpenAI plans to expand its investments in multimodal AI experiences, including video, to further enhance agentic interactions. Future efforts will also explore custom voice development while ensuring adherence to ethical and safety standards. As AI continues to evolve, these advancements pave the way for more sophisticated and natural human-machine communication.
