Robotics – Devstyler.io (https://devstyler.io) – news for developers from tech to lifestyle

Top Trends at NVIDIA GTC 2024
https://devstyler.io/blog/2024/03/27/top-trends-at-nvidia-gtc-2024/ – Wed, 27 Mar 2024

NVIDIA showcased innovative technologies during NVIDIA GTC 2024 in San Jose, California. Although these technologies serve different purposes, they are united by one common element: generative AI, which was the center of attention.

Below are some of the tech trends showcased during NVIDIA GTC, as reported by TechRepublic.

Retrieval-augmented generation (RAG)

A technique that aims to reduce AI “hallucinations,” or inaccuracies, retrieval-augmented generation lets a generative AI model check its work against external resources, such as scientific papers or documents. RAG appeals to enterprise customers because it increases the reliability of the generated content.
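The workflow can be sketched in a few lines of Python. This is a toy illustration only: the keyword-overlap `retrieve` and the placeholder `generate` stand in for a real vector search and a real LLM call, and are not any vendor's API.

```python
# Minimal RAG sketch: retrieve supporting passages, then pass them to the
# model so the answer is grounded in (and checkable against) sources.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate(prompt: str) -> str:
    """Placeholder standing in for a call to a generative model."""
    return f"Answer grounded in: {prompt}"

def rag_answer(query: str, documents: list[str]) -> str:
    context = " | ".join(retrieve(query, documents))
    # The model sees the retrieved passages, so its output can be checked
    # against external sources instead of relying on memorized knowledge.
    return generate(f"Context: {context}\nQuestion: {query}")

docs = [
    "Blackwell GPUs target trillion-parameter model training.",
    "RAG reduces hallucinations by grounding answers in documents.",
    "Edge AI runs inference close to sensors.",
]
print(rag_answer("How does RAG reduce hallucinations?", docs))
```

In a production system the retriever would typically be an embedding-based vector search over an enterprise document store, but the grounding step stays the same.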

“AI factories” to increase storage and computing needs

Many organizations at NVIDIA GTC described themselves as “AI factories” that provide enterprises with access to the storage and computing power they need to create artificial intelligence.

NexGen Cloud, which calls this service “GPUaaS,” is among the companies that will provide access to NVIDIA’s Blackwell GPUs, built for trillion-parameter-scale models, later this year.

Trillion-parameter jobs require a lot of compute, and organizations are betting they can build a business model around providing just the right amount of computing power to customers.

During a preview briefing on March 15, Greg Findlen, senior vice president of product management for data management at Dell, shared that data storage must support high-performance structured data as well as unstructured data, such as documents, images and video.

Edge AI

Organizations focused on edge AI also took their place at NVIDIA GTC 2024, spanning a wide variety of domains: robotics, automotive, industrial, healthcare, mission-critical systems, and retail.

Many of these were powered by NVIDIA’s Jetson robotics platform. NVIDIA Metropolis microservices for Jetson Orin allow developers to use API calls to add generative AI capabilities, making robots more responsive and adaptable to their environment.

Private AI for enterprises

Organizations are working on creating private generative AI that can securely access their own data while providing the flexibility of a publicly available AI like ChatGPT.

During the show, NVIDIA mentioned Mistral AI several times; the company provides a large open-source language model that customers can host on their own servers.

Copilots can draw on company-owned data

NVIDIA GTC featured a wide range of AI copilots that can draw answers from specific, company-owned structured and unstructured data.

Questions and Answers about Robotics by Matthew Johnson-Roberson
https://devstyler.io/blog/2023/11/13/questions-and-answers-about-robotics-by-matthew-johnson-roberson/ – Mon, 13 Nov 2023

Over the coming weeks, TechCrunch will be taking us through robotics in more detail, talking to experts in the field who share more about it.

To start the week, we are sharing a conversation between TechCrunch and Matthew Johnson-Roberson, in which he discusses robotics, its future, the role of artificial intelligence, and other interesting questions related to the field.

Matthew Johnson-Roberson is an American researcher, entrepreneur and educator. As of January 2022, he is the director of the Robotics Institute at Carnegie Mellon University. Prior to that, he had been a professor at the University of Michigan’s College of Engineering since 2013, where he co-directed UM’s Ford Center for Autonomous Vehicles (FCAV) with Ram Vasudevan.

His research focuses on computer vision and artificial intelligence, with specific applications to autonomous underwater vehicles and self-driving cars. He is also the co-founder and CTO of Refraction AI, a company focused on providing last-mile autonomous delivery.

What role(s) will generative AI play in the future of robotics?

Generative AI, through its ability to generate novel data and solutions, will significantly bolster the capabilities of robots. It could enable them to better generalize across a wide range of tasks, enhance their adaptability to new environments, and improve their ability to autonomously learn and evolve.

What are your thoughts on the humanoid form factor?

The humanoid form factor is a really complex engineering and design challenge. The desire to mimic human movement and interaction creates a high bar for actuators and control systems. It also presents unique challenges in terms of balance and coordination. Despite these challenges, the humanoid form has the potential to be extremely versatile and intuitively usable in a variety of social and practical contexts, mirroring the natural human interface and interaction. But we probably will see other platforms succeed before these.

Following manufacturing and warehouses, what is the next major category for robotics?

Beyond manufacturing and warehousing, the agricultural sector presents a huge opportunity for robotics to tackle challenges of labor shortage, efficiency, and sustainability. Transportation and last-mile delivery are other arenas where robotics can drive efficiency, reduce costs, and improve service levels. These domains will likely see accelerated adoption of robotic solutions as the technologies mature and as regulatory frameworks evolve to support wider deployment.

How far out are true general-purpose robots?

The advent of true general-purpose robots, capable of performing a wide range of tasks across different environments, may still be a distant reality. It requires breakthroughs in multiple fields including AI, machine learning, materials science, and control systems. The journey toward achieving such versatility is a step-by-step process where robots will gradually evolve from being task-specific to being more multi-functional and eventually general purpose.

Will home robots (beyond vacuums) take off in the next decade?

The next decade might witness the emergence of home robots in specific niches, such as eldercare or home security. However, the vision of having a general-purpose domestic robot that can autonomously perform a variety of household tasks is likely further off. The challenges are not just technological but also include aspects like affordability, user acceptance, and ethical considerations.

What important robotics story/trend isn’t getting enough coverage?

Despite significant advancements in certain niche areas and successful robotic implementations in specific industries, these stories often get overshadowed by the allure of more futuristic or general-purpose robotic narratives. The incremental but impactful successes in sectors like agriculture, healthcare, or specialized industrial applications deserve more spotlight as they represent the real, tangible progress in the field of robotics.

Top 3 Programming Languages for AI Development
https://devstyler.io/blog/2023/06/14/top-3-programming-languages-for-ai-development/ – Wed, 14 Jun 2023

Artificial intelligence encompasses a range of technologies including machine learning, natural language processing, robotics and many more. As you know, programming is a necessary component for developing artificial intelligence (AI) systems, and choosing the right language to learn can help you get started in this fast-growing industry.

In the context of artificial intelligence, programming involves creating algorithms that allow machines to learn, reason and make human-like decisions. In the ever-evolving world of artificial intelligence, staying on top of new developments is crucial for any developer who wants to exploit the capabilities of artificial intelligence.

Python
Data scientists often use Python because it’s easy to learn and offers flexibility, intuitive design, and versatility. One of the primary reasons for its popularity is its readability, which makes it easy for developers to write and understand code.

Python offers several advantages for AI development, including:

- Easy to learn: Python has a simple and intuitive syntax that’s easy to learn, making it an ideal language for beginners.
- Large community: Python has a vast community of developers who constantly contribute to its development, creating new libraries and tools that make it even more efficient for AI development.
- Wide range of libraries: Python has a vast library of pre-built modules and packages that can be used for AI development, such as NumPy, SciPy, Pandas, and TensorFlow.
- Interpreted language: Python is an interpreted language, meaning it doesn’t need to be compiled before running, saving time and effort.
- Platform-independent: Python can run on different platforms, such as Windows, Linux, and macOS, making it easier for developers to work on different machines.
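A few lines are enough to see the readable, no-compile-step workflow in practice. Here is a tiny nearest-neighbour classifier using only the standard library (real projects would typically reach for NumPy or scikit-learn instead):

```python
# A minimal 1-nearest-neighbour classifier: classify a sample by the label
# of the closest training point. Runs directly, no compilation step.
from math import dist

def nearest_neighbor(sample, labeled_points):
    """Return the label of the training point closest to `sample`."""
    closest = min(labeled_points, key=lambda p: dist(sample, p[0]))
    return closest[1]

# Two labeled 2-D training points, then classify a new sample.
training = [((0.0, 0.0), "cat"), ((5.0, 5.0), "dog")]
print(nearest_neighbor((1.0, 0.5), training))  # closest to (0.0, 0.0)
```

The whole algorithm reads almost like its English description, which is exactly the readability advantage described above.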

Java
Java is a general-purpose programming language widely used in building AI applications. Its strengths lie in its ability to handle large-scale projects, platform independence, and strong memory management. Here are some reasons why Java is useful for AI development:

- Platform independence: Java code can run on multiple operating systems, making it a universal language for AI development that can be used across different devices and platforms.
- Large developer community: Java has a large community of developers who contribute to developing new tools and libraries for AI development.
- Object-oriented programming: Java’s object-oriented programming features can make it easier to write modular, reusable, and scalable code. This is especially useful for building complex AI applications.

C++
C++ is another high-performance programming language well-suited for building AI applications that require speed and efficiency. Its strengths lie in the following:

- Its ability to handle low-level programming;
- Its memory management;
- Its ability to compile to machine code.

One of the most significant advantages of using C++ for AI development is its speed. It’s one of the fastest programming languages available, making it great for AI applications that require real-time processing. Additionally, C++ is a cross-platform language, meaning that code can be compiled for different operating systems, making it versatile for AI development.

NVIDIA Chief Scientist Inducted into Silicon Valley Engineers Hall of Fame
https://devstyler.io/blog/2023/03/06/nvidia-chief-scientist-inducted-into-silicon-valley-engineers-hall-of-fame/ – Mon, 06 Mar 2023

From climbing mountains in the annual California Death Ride bike challenge to creating a low-cost open-source ventilator in the early days of the COVID-19 pandemic, NVIDIA Chief Scientist Bill Dally is no stranger to accomplishing near-impossible feats.

On Friday, he achieved another rare milestone: induction into the Silicon Valley Engineers Council Hall of Fame.

The purpose of the council – a coalition of engineering societies, including the Institute of Electrical and Electronics Engineers, SAE International and the Association for Computing Machinery – is to promote engineering programs and improve community cohesion through science.

Past members of the Hall of Fame include such industry notables as Intel founders Robert Noyce and Gordon Moore, former Stanford University president and MIPS founder John Hennessy, and Google Distinguished Engineer and University of California at Berkeley Professor Emeritus David Patterson.

Recognition as an “Industry Leader”

Accepting the award, Dally said:

“I am honored to be inducted into the Silicon Valley Hall of Fame. The work that goes into being recognized as part of the Hall of Fame is part of a great team effort. Many faculty and students were involved in stream processing research at Stanford, and a very large team at NVIDIA was involved in translating that research into GPUs. Now is a really exciting time to be a computer engineer.”

He added:

“The future is bright, with many more demanding applications waiting to be accelerated using the principles of stream processing and accelerated computing.”

His induction began with a video featuring colleagues and friends spanning his career at Caltech, MIT, Stanford and NVIDIA.

In the video, NVIDIA founder and CEO Jensen Huang described Dally as “an extraordinary scientist, engineer, leader and incredible human being.”

Fei-Fei Li, Stanford professor of computer science and co-director of the Stanford Institute for Human-Centered AI, spoke highly of Dally’s “journey from a world-class academic scientist and researcher to an industry leader” who led one of the “greatest digital AI revolutions of our time – both software and hardware.”

After the tribute video, Fred Barez, chairman of the Hall of Fame committee and professor of mechanical engineering at San Jose State University, took the stage and said:

“This year’s award winner has made significant contributions not only to his profession, but to Silicon Valley and beyond.”

At the heart of the GPU revolution
As head of NVIDIA Research for nearly 15 years, Dally has built a team of more than 300 scientists worldwide, with groups spanning a wide range of topics including artificial intelligence, graphics, simulation, computer vision, self-driving cars and robotics.

Prior to joining NVIDIA, Dally did pioneering engineering work at some of the world’s top academic institutions. His development of stream processing at Stanford led directly to GPU computing, and his contributions are responsible for much of the technology used in high-performance computing networks today.

Stilride Adds 3D Printing Capabilities to Its Arsenal by Partnering with Adaxis
https://devstyler.io/blog/2023/02/16/stilride-adds-3d-printing-capabilities-to-its-arsenal-by-partnering-with-adaxis/ – Thu, 16 Feb 2023

Swedish start-up Stilride has previously partnered with space-innovation start-up I.S.A.A.C to explore the possibilities of applying its technology in space, and has been named one of the partners in a project by electric car company Polestar that aims to create the world’s first climate-neutral car, tech.eu reports.

Stilride may be starting out by producing an electric motorcycle, but the company clearly has aspirations that are literally out of this world.

“The team at Adaxis has a huge amount of knowledge and experience in robotics and optimising robotic construction, so it’s great to have them on board to strengthen the capabilities of our tech,”

said Stilride co-founder and CEO Jonas Nyvang.

Thanks to a partnership announced today with French-Swedish robotics company Adaxis, the company will no longer need to source components, including hinges, fenders, and side covers, from outside suppliers.

“Not only will their technology improve the sustainability, speed, and cost-efficiency of producing the SUS1, but it will also help us reach our ultimate goal of rolling out a fully distributed production model where the construction of our products can be fully automated, powered by robotics technology.”

Nyvang continued.

This means Stilride will be able to manufacture a number of these parts in-house: Adaxis’ software allows engineers to program a robotic arm to quickly 3D print large and complex parts. The move should also significantly reduce costs and material wastage, and may improve quality control.

Amazon presents the AWS IoT RoboRunner
https://devstyler.io/blog/2022/02/14/amazon-presents-the-aws-iot-roborunner/ – Mon, 14 Feb 2022

Amazon is about to provide a new service, AWS IoT RoboRunner, to help companies build and deploy robotics management applications.

IoT RoboRunner is developed from technology already in use at Amazon warehouses and provides an infrastructure to connect fleets of robots and automation software, says InfoQ.

Amazon’s new service works across operations, managing data types like location and robotic task data in a central repository. Channy Yun, a principal developer advocate at AWS, explains how the new service can help robot operators:

“Many customers choose different types of robots – often from different vendors in a single facility. Robot operators want to access the unified data required to build applications that work across a fleet of robots. However, when a new robot is added to an autonomous operation, complex and time-consuming software integration work is required to connect the robot control software to work management systems.”

The new service will connect robots and different work management systems with the Fleet Manager and Task Manager libraries. In that way, it will orchestrate work through a single system view and address interoperability problems.

Among the suggested use cases for the new service are multi-robot collaboration, health-status monitoring, and space management. With the help of IoT RoboRunner’s APIs, developers can integrate new robot vendors and build management applications: once a customized task-manager code has been developed and tested, it can be deployed as a Lambda function, while the fleet-manager gateway code can run as an IoT Greengrass component.

Amazon has written four RoboRunner tutorials: developing a custom task manager for fleet orchestration, building a gateway that serves as a message-routing layer, reading robot and task status, and creating a mission to monitor robot states. The centralized data repository contains three registries: a vehicle/fleet registry, a destination registry, and a task registry.
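As a hedged sketch, a custom task manager of the kind those tutorials describe might look like the following Lambda handler. The event shape, the worker records, and the in-memory task registry here are hypothetical stand-ins for illustration; a real integration would call the AWS IoT RoboRunner APIs rather than these dictionaries.

```python
# Hypothetical task-manager Lambda handler: assign an incoming task to the
# first idle worker in the fleet, or mark it pending if none is available.

TASKS = {}  # stand-in for the task registry: task_id -> task record

def lambda_handler(event, context):
    """Assign the task in `event` to an idle worker, if one exists."""
    fleet = event.get("fleet", [])  # e.g. [{"id": "amr-1", "status": "idle"}]
    task_id = event["taskId"]
    idle = [w for w in fleet if w.get("status") == "idle"]
    if idle:
        TASKS[task_id] = {"state": "ASSIGNED", "worker": idle[0]["id"]}
    else:
        TASKS[task_id] = {"state": "PENDING", "worker": None}
    return TASKS[task_id]

# Local invocation with a two-robot fleet, one idle and one busy.
result = lambda_handler(
    {"taskId": "t-1", "fleet": [{"id": "amr-1", "status": "idle"},
                                {"id": "amr-2", "status": "busy"}]},
    None,
)
print(result)  # {'state': 'ASSIGNED', 'worker': 'amr-1'}
```

Deployed as a Lambda function behind the service, this kind of handler is what lets one system view orchestrate work across robots from different vendors.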

AWS IoT RoboRunner is available in preview in the us-east-1 and eu-central-1 regions. The service, which is eventually consistent, has no additional costs during the preview period.

Are Robots capable of Walking through a Labyrinth?
https://devstyler.io/blog/2021/12/13/are-robots-capable-of-walking-through-a-labyrinth/ – Mon, 13 Dec 2021

Is it possible for robots to learn how to successfully navigate the twists and turns of a labyrinth? Researchers at the Eindhoven University of Technology (TU/e) in the Netherlands and the Max Planck Institute for Polymer Research in Mainz, Germany, have made it possible, proving once again that there is no such thing as “impossible” when it comes to technology.

However, machine learning, like every successful thing in this world, has its disadvantages. One of them is that it consumes a great deal of energy, which is why researchers are looking to mimic the far more efficient human brain.

As we know, the neurons in our brain communicate with one another through so-called synapses, which are strengthened each time information flows through them. It is this plasticity that ensures that humans remember and learn, and researchers draw inspiration from it to create more efficient machines. Imke Krauhausen, Ph.D. student at the Department of Mechanical Engineering at TU/e, explains:

“In our research, we have taken this model to develop a robot that is able to learn to move through a labyrinth. Just as a synapse in a mouse brain is strengthened each time it takes the correct turn in a psychologist’s maze, our device is ‘tuned’ by applying a certain amount of electricity. By tuning the resistance in the device, you change the voltage that controls the motors. They, in turn, determine whether the robot turns right or left.”

Krauhausen and her team built the robot from a Lego Mindstorms EV3 robotics kit. It is equipped with two wheels, traditional guiding software that allows it to follow a line, and a number of reflectance and touch sensors.

When put in a maze, the robot is instructed to turn either right or left whenever it reaches a dead end or diverges from the designated path to the exit. Krauhausen says:

“In the end, it took our robot 16 runs to find the exit successfully. And, what’s more, once it has learned to navigate this specific route (target path 1), it can navigate any other path that it is given in one go (target path 2). So, the knowledge it has acquired is generalizable.”

Organic material is used for the neuromorphic robot. It is both stable and able to maintain a large part of the specific states in which it has been tuned during the various runs through the labyrinth. This ensures that the learned behavior ‘sticks’, just like neurons and synapses in a human brain remember events or actions.
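The tuning loop Krauhausen describes can be illustrated with a toy simulation. This is purely illustrative, not the TU/e implementation: each maze junction gets a tunable “conductance” that is nudged toward the correct turn whenever the robot goes wrong, standing in for the small electrical pulses applied to the organic device.

```python
# Toy model of device tuning: a per-junction "conductance" biases the turn
# decision; every wrong turn nudges it toward the correct behaviour, so the
# learned setting "sticks" across runs, like the organic device's states.
import random

def learn_maze(correct_turns, runs=50, seed=0):
    """Return the run number on which the robot first exits cleanly."""
    rng = random.Random(seed)
    conductance = [0.5] * len(correct_turns)  # 0 biases left, 1 biases right
    for run in range(1, runs + 1):
        success = True
        for i, correct in enumerate(correct_turns):
            turn = 1 if rng.random() < conductance[i] else 0
            if turn != correct:
                # "Tune" the device a little toward the correct turn.
                conductance[i] += 0.1 if correct == 1 else -0.1
                conductance[i] = min(max(conductance[i], 0.0), 1.0)
                success = False
        if success:
            return run
    return None

# A maze with four junctions (1 = right, 0 = left).
print(learn_maze(correct_turns=[1, 0, 1, 1]))
```

Because the bias only ever moves toward the correct turn, the robot converges after a handful of runs, a crude analogue of the 16 runs reported for the real robot.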

During earlier research, dating from 2015 and 2017, the material proved able to be tuned across a much larger range of conductance than inorganic materials, and to “remember”, or store, learned states for extended periods. Since then, organic devices have become a hot topic in the field of hardware-based artificial neural networks. Krauhausen added:

“Because of their organic nature, these smart devices can in principle be integrated with actual nerve cells. Say you lost your arm during an injury. Then you could potentially use these devices to link your body to a bionic hand.”

She also said that their robots currently rely on traditional software to move around, and that reducing this reliance is something she will be working on in the next phase of her research.

Is it true that “Living robots” are now capable of reproducing?
https://devstyler.io/blog/2021/12/01/is-it-true-that-living-robots-are-now-capable-of-reproducing/ – Wed, 01 Dec 2021

It is an indisputable fact that technology is developing very fast, and it will soon replace many of the traditional ways we solve problems. One of the most discussed topics is robotics and the way it has developed over time.

Scientists are working hard to improve their robots’ functions and capabilities. Now, the first “living robots” are capable of the most essential part of any species’ survival: reproduction.

According to new research in the Proceedings of the National Academy of Sciences, these robots, known as Xenobots, use an entirely novel form of biological self-replication.

The authors of the study found that the machines can gather hundreds of single cells and assemble them into “baby” Xenobots. A few days later, the offspring grow to look and move just like their parents, and the process can repeat over and over again.

Douglas Blackiston, a study co-author and senior scientist at Tufts University, revealed in a statement that people have long assumed scientists had already worked out all the ways that life can reproduce or replicate, but this form of replication has never been observed before.

What are the millimeter-wide Xenobots assembled from? The answer is quite simple and short: living cells scraped from frog embryos.

Michael Levin, a biologist at Tufts University and co-leader of the new research, said: 

“They would be sitting on the outside of a tadpole, keeping out pathogens and redistributing mucus. But we’re putting them into a novel context. We’re giving them a chance to reimagine their multicellularity.”

Interestingly, a Xenobot can produce children, but the system normally dies out soon after. To give the parents a chance to see their kids grow up, the researchers turned to AI.

The team used an evolutionary algorithm to test billions of potential body shapes in simulation.

The system was designed to find forms that would be effective at self-replication. One of its striking creations resembled Pac-Man.
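The approach can be sketched as a toy evolutionary algorithm. This is only the skeleton of the method: here candidate “body shapes” are bit strings and the fitness function is a stand-in score, whereas the real work evolved 3D cell arrangements inside a physics simulator.

```python
# Skeleton evolutionary algorithm: keep the fittest half of the population,
# produce mutated children from the survivors, and repeat for many
# generations, returning the best design found.
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=30, seed=1):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]   # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # flip one bit (mutation)
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

# Stand-in fitness: the count of 1-bits in the genome.
best = evolve(fitness=sum)
print(best, sum(best))
```

In the Xenobot research, evaluating one candidate meant simulating how that cell arrangement behaves, which is why billions of shapes had to be screened in simulation before any design was built from real cells.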

Sam Kriegman, the lead author of the new study, said:

“It’s very non-intuitive. It looks very simple, but it’s not something a human engineer would come up with. Why one tiny mouth? Why not five?”

A Xenobot was then built and its child-rearing skills were tested. 

In testing, the scientists discovered that the AI-designed parent could use its Pac-Man-shaped “mouth” to compress stem cells into circular offspring.

Their children then built grandchildren, who built great-grandchildren, who built great-great-grandchildren, and so a Xenobot dynasty took shape.

The Xenobots can not only work in groups, self-heal, and even record memories; they are now also capable of raising a family.

Although it might seem terrifying to most people, the researchers are more optimistic: they believe their system will develop further and prove useful not only for the environment but also for people’s lives.

Who is Ai-Da and Is it possible for a Robot to create artworks and write poetry?
https://devstyler.io/blog/2021/11/29/who-is-ai-da-and-is-it-possible-for-a-robot-to-create-artworks-and-write-poetry/ – Mon, 29 Nov 2021

What comes to mind when we talk about artificial intelligence? Most people associate it with books and movies steeped in science fiction; indeed, AI itself has inspired the plots that so many writers and screenwriters have created.

On October 23, 2021, an exhibition presented by the organization Art D’Egypte, in partnership with the Egyptian Ministry of Antiquities and Tourism, took place at the Great Pyramids of Giza near Cairo, and Ai-Da was part of the event.

Ai-Da Robot conceived by Aidan Meller. Photo Credits: Frieze

However, are we inclined to associate AI with art or poetry? Well, Ai-Da can create both. She is a highly realistic robot, invented by Aidan Meller in Oxford, central England, and she spends most of her time writing poems and creating art. On Friday she even gave a public performance of poetry she wrote using her own algorithms, in celebration of the great Italian poet Dante.
The event took place at the University of Oxford’s renowned Ashmolean Museum as part of an exhibition marking the 700th anniversary of Dante’s death.

Her poem was produced in response to Dante’s “Divine Comedy” and was described as “deeply emotive” by her creator Meller. It includes the following verse:

“We looked up from our verses like blindfolded captives,

Sent out to seek the light; but it never came

A needle and thread would be necessary

For the completion of the picture.

To view the poor creatures, who were in misery,

That of a hawk, eyes sewn shut.”

According to Meller, his robot’s ability to imitate human writing is “so great, if you read it you wouldn’t know that it wasn’t written by a human”. He shared that when Ai-Da was reading her poem on Friday evening, “it was easy to forget that you’re not dealing with a human being.”

“The Ai-Da project was developed to address the debate over the ethics of further developing AI to imitate humans and human behavior,” Meller told CNN. “It’s finally dawning on us all that technology is having a major impact on all aspects of life and we’re seeking to understand just how much this technology can do and what it can teach us about ourselves.”

Meller said one key thing he and the team working with Ai-Da have learned while developing her is that the project hasn’t taught them how “human she is — but it’s shown us how robotic we are as humans.” Watching her behaviour, which is modelled on the way real human beings behave in everyday life, you can momentarily think that the actual robots are us. Meller also commented:

“Through Ai-Da and through the use of AI, we can learn more about ourselves than ever before — Ai-Da allows us to gain a new insight into our own patterns and our own habits, as we see her imitate them right in front of us.”

As we mentioned, Ai-Da can also create artworks, which is one of the reasons she is so impressive. She made an artwork for the Dante exhibition titled “Eyes Wide Shut”. It was created in response to an incident in Egypt in October, when Egyptian security forces wanted to remove the cameras in her eyes over concerns about surveillance and security. Meller said the incident showed just how much nervousness there is in the world around technology and its advancements.

Her creator is aware of the concerns over the increasingly advanced development of artificial intelligence, but he thinks that “technology on its own is benign — it’s those that control it whose intentions could be morally and ethically questionable.” Meller said:

“The biggest fear we should have should be of ourselves and the human capability to use technology to oppress, not of the AI itself.”

In his opinion, Ai-Da can be a pioneer in the world of AI: her productions will push the boundaries of what can be achieved in technology and will help us learn more about ourselves through the robot’s eyes.

Are everyday Robots leaving the lab?
https://devstyler.io/blog/2021/11/24/are-everyday-robots-leaving-the-lab/ – Wed, 24 Nov 2021

We live in a world where technology is developing very fast. Our everyday life has become filled with smart gadgets that help us work more efficiently and be more productive. We no longer have to worry about going to places we have never been before just because we don’t know how to get there – that is what GPS navigation is for.

There is no use being upset when our phone battery dies – there are gadgets that can recharge our devices wherever we are. Our lives are digitized, and the best we can do is take advantage of everything that offers.

Hans Peter Brøndmo

As we said, technology is developing really fast, but to be concrete, here we will talk about robots.

Hans Peter Brøndmo, Chief Robot Officer of the Everyday Robots project, revealed that in the past few years he and his team have worked hard to see whether it is possible to teach robots to perform useful tasks in the messy, unstructured spaces of our everyday lives. They imagine a world where robots do the same things we do every day, like sorting the trash and wiping tables in cafes. Brøndmo says in a blog post:

“We believe that robots have the potential to have a profoundly positive impact on society and can play a role in enabling us to live healthier and more sustainable lives.”

Hence, after months and years of hard work, Peter Brøndmo announced:

“Today, I’m pleased to share that we have early signs that this is possible. We are now operating a fleet of more than 100 robot prototypes that are autonomously performing a range of useful tasks around our offices. The same robot that sorts trash can now be equipped with a squeegee to wipe tables, and the same gripper that grasps cups can learn to open doors.”

Over the past few years, Brøndmo and his team have focused on building an integrated hardware and software system designed for learning, including transferring learning from the virtual world to the real world. Their robots are equipped with a mix of cameras and sensors that help them take in the world around them.

Brøndmo admitted that it took the equivalent of four months for a robot to learn how to grasp small objects such as keys or toys, reaching a 75% success rate without the use of simulation.

“Today, a single robot learns how to perform a complex task such as opening doors with a 90% success rate with less than a day of real-world learning.”

Are everyday robots really leaving the lab?

According to Brøndmo’s blog post, over the coming months Googlers working in Mountain View will be able to catch glimpses of his team’s prototypes wiping tables after lunch in the cafes, or opening meeting-room doors to check whether a room needs to be tidied. He hopes the robots he and his team create will be as useful in our physical lives as computers have been in our digital lives, and he believes robots have the potential to be tools that help us find solutions to every challenge we face – from finding new ways to live more sustainably to caring for loved ones.
