Bringing sustainable autonomous driving to India and to the world. This marks the beginning of our autonomous vehicle trials, bringing autonomous transportation to a part of the world no one expected: India.
Currently, our city trials are restricted to autonomous driving operations at night, an ongoing endeavour to bring safe, scalable, and cost-effective autonomous mobility globally. This demo shows autonomous navigation at night on city roads in India, in the city of Bhopal. Earlier, in the month of October, we introduced bidirectional negotiation on single-lane roads to the world of autonomous driving -- a capability no other autonomous mobility company has demonstrated. While the technology is almost ready for city traffic as well, we are starting our operations with moderate traffic conditions at night.
While city roads at night are sparsely populated, they pose their own unique challenges. For example, one can notice an auto-rickshaw coming in from the wrong direction. Similarly, bikes at night can cross the road in a very stochastic manner. Our ability to negotiate bidirectional traffic on single-lane roads makes it look like a cakewalk, whereas autonomous vehicles that depend too heavily on maps -- like those of Cruise and Waymo, an approach that has contributed to recent license suspensions -- can get stuck very easily in such scenarios.
We will keep bridging the gap between our daytime chaotic-traffic autonomy demos on highways and our night-time city demos, adding the ability to negotiate very dense night traffic in the coming weeks. Furthermore, with our unique unsupervised learning and reinforcement learning based approach to further enhance the bidirectional negotiation and overtaking policies at Swaayatt Robots, we will showcase a one-of-a-kind demo in the month of January 2024.
Introducing Bidirectional Negotiation on Single Lane Roads to the World of Autonomous Driving
22 Oct 2023
In this demo, we showcase autonomous driving at high speeds on a single-lane road with bidirectional traffic negotiation. When a vehicle approaches from the opposite end of the road, our autonomous vehicle is required to move to the side of the road.
This is the world's first demo showcasing such negotiation capabilities in the context of autonomous vehicles. We have developed a novel classical motion planning and decision-making algorithmic framework, which we had previously demonstrated off-road in September at low speeds. The framework plans and makes decisions over multiple horizons, interleaving the decisions of multiple agents, to allow our autonomous vehicle at Swaayatt Robots (स्वायत्त रोबोट्स) to execute such behavior.
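To make the multi-horizon, interleaved idea concrete, here is a minimal, hypothetical sketch in Python: the ego vehicle scores a handful of maneuvers (proceed, yield, pull aside) against the oncoming vehicle's predicted motion over a short and a long horizon, and commits to the cheapest one. All action names, costs, and thresholds are illustrative assumptions, not Swaayatt Robots' actual framework.

```python
from math import inf

ACTIONS = {"proceed": 1.0, "yield": 0.3, "pull_aside": 0.0}  # speed factors

def min_gap(ego_x, ego_v, onc_x, onc_v, action, horizon, dt=0.5):
    """Closest approach to the oncoming vehicle over one planning horizon."""
    gap, v_e = inf, ego_v * ACTIONS[action]
    for _ in range(int(horizon / dt)):
        ego_x += v_e * dt
        onc_x -= onc_v * dt                    # oncoming vehicle closes in
        gap = min(gap, abs(onc_x - ego_x))
    return gap

def decide(ego_x, ego_v, onc_x, onc_v, horizons=(2.0, 6.0)):
    """Score every maneuver over short and long horizons; pick the cheapest."""
    best, best_cost = None, inf
    for action, speed_factor in ACTIONS.items():
        cost = 0.0
        for h in horizons:
            if min_gap(ego_x, ego_v, onc_x, onc_v, action, h) < 3.0:
                cost += 1e3                    # assumed 3 m safety bubble
        cost += (1.0 - speed_factor) * 10.0    # penalise lost progress
        cost += 5.0 if action == "pull_aside" else 0.0  # off-road effort
        if cost < best_cost:
            best, best_cost = action, cost
    return best

# Oncoming vehicle 75 m away at 10 m/s: proceeding or merely slowing would
# shrink the gap below the bubble, so the planner elects to pull aside.
print(decide(ego_x=0.0, ego_v=12.0, onc_x=75.0, onc_v=10.0))
```

The short horizon catches imminent conflicts while the long horizon catches ones that only develop later; evaluating both is a toy version of what the text calls planning over multiple horizons.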
We are working towards scaling this framework through end-to-end deep inverse reinforcement learning, which will be showcased in January 2024.
The autonomous driving industry, predominantly in North America and Europe, focuses on map-based avoidance with a significant margin of error on broad roads. Faced with scenarios like the ones shown here, such vehicles often stop indefinitely or yield. In contrast, our research effort aims to enable autonomous vehicles to make decisions such as when to yield, when to go off-road, or whether to stop completely and wait for the road to clear. The present demo marks the beginning of this journey towards Level-5 autonomous driving, including negotiation at high speeds through chaotic, dense, and tight roads.
We executed this demo on the 22nd of October, 2023, on the day of Maha-Ashtami. This marks the beginning of our research on achieving speeds of 101 KM/H, which we will demonstrate in around six months. This represents the first step in the research direction, offering a glimpse of the future of autonomous driving.
Driving where no Autonomous Vehicle has driven before!
22 Sep 2023
This demo showcases autonomous driving in the wild, where our autonomous vehicle was tasked with avoiding generic static and dynamic obstacles in its path -- with a very low margin of error. At some points, our autonomous vehicle had only a few tens of centimetres of margin from the impending obstacles, with very rough and bumpy terrain ahead of it. For a significant portion of the navigation, the vehicle had to negotiate bidirectional traffic on a single-lane off-road track. Although we developed novel motion planning and decision-making frameworks to enable this, around January we will showcase this capability at very high speeds, via a multi-agent intent analysis and bidirectional negotiation policy learned via deep unsupervised learning + reinforcement learning.

This demo made use of very sparse stochastic HD maps, for stochastic projection of boundary information. In the coming weeks and months we will showcase off-road autonomous driving without any HD map, with the agent building the cost-map of the navigable region via unsupervised deep learning -- including building an inherent understanding of the boundaries of the environment -- utilizing only GPS nodes as waypoints.

Our research in off-road autonomous driving focuses on the end goal of enabling autonomous vehicles to traverse any terrain without HD maps, learning a driving policy via unsupervised machine learning and reinforcement learning -- pushing the boundary of what is considered possible in autonomous driving. This will eventually trickle down to on-road autonomous driving, where we will showcase the elimination of the key perception algorithms altogether: road detection, lane detection, terrain mapping, and perhaps obstacle detection as well.

In pursuit of Level-5 autonomy, we will be doing a 100 KM/H autonomous driving demo, showcasing: (i) generic bidirectional negotiation at high speeds on a single-lane road, (ii) overtaking with the ability to abort like human drivers do, and (iii) high-speed navigation without any maps. As a first step in this endeavor, we will execute a similar demo in a single-lane on-road environment at high speeds, where, to avoid oncoming obstacles, the vehicle will often have to drift off-road -- utilizing the stack we have built for generic off-road autonomous driving, which will soon be scaled with unsupervised learning.
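As a rough illustration of the "cost-map of the navigable region" mentioned above, the sketch below turns a grid of terrain heights into a per-cell traversal cost from local roughness and slope. The grid, weights, and roughness measure are invented for the example; the actual system learns this representation rather than hand-coding it.

```python
import numpy as np

def cost_map(height_grid, cell=0.2, rough_w=4.0, slope_w=2.0):
    """Cost per cell from local roughness (deviation from 3x3 mean) and slope."""
    gy, gx = np.gradient(height_grid, cell)          # slope components
    slope = np.hypot(gx, gy)
    pad = np.pad(height_grid, 1, mode="edge")
    h, w = height_grid.shape
    local_mean = sum(pad[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)) / 9.0
    rough = np.abs(height_grid - local_mean)
    return rough_w * rough + slope_w * slope

terrain = np.random.default_rng(0).normal(0.0, 0.05, size=(50, 50))
terrain[20:25, 20:25] += 0.5                          # a bump the planner should avoid
costs = cost_map(terrain)
print("max cost at bump region:", costs[20:25, 20:25].max().round(2))
```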
Autonomous Driving Demonstration in Critical Traffic Dynamics in India
31 Aug 2023
In this demo, our vehicle can be seen traversing through dynamic, unpredictable, and tight obstacles while also respecting the soft constraints of its lane and the environment.
Our research at Swaayatt Robots (स्वायत्त रोबोट्स) on several different fronts of theoretical computer science and applied mathematics -- including, but not limited to, #reinforcementlearning, transfer learning, #deeplearning, #machinelearning, motion planning, SLAM and mapping, computer vision, stochastic processes, and manifold alignment -- pursued over the years and demonstrated through 60+ demos, is now taking a definitive shape to enable sustainable autonomous driving at scale.
Next, we will showcase our R&D in enabling mapping at a very large scale via both contextual and symbolic transfer of knowledge across maps, and also showcase generalized obstacle avoidance off-road -- first via perception algorithms, and then by eliminating several of the perception algorithms, without any HD maps, via the novel unsupervised learning based decision-making framework we have been working on.
We expect to open the autonomous driving and #autonomousvehicles experience to the public in India in the month of October. This will be historic in the context of autonomous driving. Very soon we will execute high-speed autonomous driving on mountainous terrain as well.
Navigating at night at high speeds (max speed during the demo: 64 KM/H), safely executing lane keeping, merging at a traffic intersection, and safely executing a roundabout navigation policy.
While the roads here were fairly well-paved, our lane detection and generation algorithm -- which is being scaled for both day and night operations, and which automatically generates lane markers on-the-fly in the absence of lane markers in the visual-sensory data -- lets us tackle both structured and unstructured roads (as we have demoed in the past). This is now being scaled for state-wide and country-wide operations.
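A minimal sketch of the "generate lane markers on-the-fly" idea: when no painted markers are visible, synthesize virtual lane delimiters by interpolating between the detected edges of the drivable region. The edge polylines, point matching, and lane count are assumed inputs for illustration, not the actual algorithm.

```python
import numpy as np

def generate_lane_markers(left_edge, right_edge, n_lanes=2):
    """Interpolate virtual lane delimiters between two road-edge polylines."""
    left = np.asarray(left_edge, dtype=float)    # (N, 2) ground-plane points
    right = np.asarray(right_edge, dtype=float)  # matched point-for-point
    markers = []
    for k in range(n_lanes + 1):                 # n lanes need n+1 delimiters
        t = k / n_lanes
        markers.append((1.0 - t) * left + t * right)
    return markers

left = [(0.0, y) for y in range(0, 50, 5)]
right = [(7.0, y) for y in range(0, 50, 5)]      # ~7 m wide road, two lanes
for i, m in enumerate(generate_lane_markers(left, right)):
    print(f"delimiter {i}: x = {m[0][0]:.1f} m")
```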
Earlier we demoed off-road autonomous driving via #reinforcementlearning, and when that research is merged with our on-road autonomous driving in the coming weeks, we will have scalable #autonomousvehicles capable of negotiating any kind of road and environmental situation.
Next, we will showcase autonomous driving at night through moderate traffic conditions in Indian cities, and very soon during the daytime as well.
Autonomous Driving Off-Roads at High Speeds in India (44 KM/H)
08 Apr 2023
Autonomous driving off-road at high speeds -- a one-of-a-kind demo, handling such levels of uncertainty on unstructured off-road terrain.
The core motion planning and decision-making in this demo is governed by a reinforcement learning agent. The vehicle reached a top speed of 44 KM/H, which is near the limit at which humans can drive safely on this road without risking drift or side-skids at turns.
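A back-of-envelope check on why ~44 KM/H can be "near the limit": the friction circle bounds cornering speed by v_max = sqrt(mu * g * r). The friction coefficient and turn radius below are assumptions for illustration, not measured values from the demo.

```python
import math

def max_corner_speed_kmh(mu, radius_m, g=9.81):
    """Maximum speed through a turn before lateral force exceeds available grip."""
    return math.sqrt(mu * g * radius_m) * 3.6

# Loose dirt/gravel mu is often quoted around 0.4-0.6; assume a ~30 m turn.
for mu in (0.4, 0.5, 0.6):
    print(f"mu={mu}: {max_corner_speed_kmh(mu, 30.0):.1f} KM/H")
```

With mu = 0.5 and a 30 m radius this gives roughly 44 KM/H, consistent with the top speed reported above.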
At Swaayatt Robots (स्वायत्त रोबोट्स) we have initiated a research project whose long-term goal, at least for off-road driving, is to enable the autonomous vehicle to make motion and behavioral decisions without relying on explicit computation of various perceptual features -- for example, localization against high-fidelity maps (HFMs), or detection of delimiters or obstacles.
This demo showcases a glimpse of the classical variant of the underlying research put into action. Within a few months, this novel decision-making and motion planning algorithmic framework will change the off-road #autonomousdriving paradigm and how #autonomousvehicles compute actions and make navigation decisions.
The deep #reinforcementlearning variant of this algorithmic framework will completely eliminate the need for explicit perception algorithms: autonomous vehicles will be able to form a gist of their environment and take behavioral actions without explicitly detecting road boundaries, delimiters, or obstacles, or computing their intentions. This research will then be utilized on-road to achieve scalable Level-5 autonomous driving.
This demo used explicit obstacle detection -- which has worked robustly on-road -- along with two LiDARs and cameras.
Autonomous Driving at Lal Bahadur Shastri National Academy of Administration
29 Mar 2023
Our autonomous driving software, which uses reinforcement learning, can be seen enabling our autonomous vehicle to navigate through very tight regions of the campus. Executing such tight turns required fundamentally innovating the controller algorithm on the fly, within a few hours -- I had to draw on my several years of mobile robotics and motion planning experience to get this done right. Our vehicle had driven all the way from Bhopal to Mussoorie, through rain, dust, and fog -- a roughly 1100 KM one-way trip. As a result, on the 20th, our braking system and many of the cameras failed to function, and we were not able to showcase our full capabilities that day. However, after much effort, we were able to reactivate our braking system on the 21st of March and repeat the demo, which is showcased here. This was an unknown campus to us, and within a few hours of fine-tuning, we were able to transfer our campus autonomous driving software to the previously unseen LBSNAA campus. This again highlights the readiness of our campus autonomous driving software to execute autonomous driving anywhere with minimal effort.
Autonomous Driving through Chaotic Traffic in India via Reinforcement Learning
24 Feb 2023
Autonomous driving through generalized traffic with an autonomous parking maneuver via a reinforcement learning-based driving policy. We will begin executing autonomous driving on urban roads -- i.e., city roads with unruly traffic -- in India starting March/April, and this demo is part of a series of demos and trials we will be doing along the way, to achieve both autonomous urban driving and 100 KM/H L-4 autonomous driving in India. In this demo, our autonomous vehicle is tasked with negotiating and avoiding generalized uncontrolled traffic, with the end goal of parking at a designated spot with near-end-effector constraints. The complexity of this task is humongous, given the unpredictable pattern of traffic and pedestrians in an open environment -- there are no traffic rules that can provide insights into the behavior of pedestrians or other vehicles. This was achieved through an autonomous driving policy learned via reinforcement learning. Earlier, in 2017, we demoed a multi-agent reinforcement learning-based framework performing autonomous navigation through stochastic tight roads in India with generalized traffic. The present demo and the earlier 2017 demo make us the only autonomous vehicles startup to have put a full-fledged autonomous SUV on Indian roads.
Autonomous Driving Off-Roads Through Dense Fog in India
04 Feb 2023
Autonomous driving off-road through very thick fog! This capability makes us the third startup/company in the world to have both on- and off-road autonomous driving capabilities, after Wayve AI (2018) and Oshkosh Defense. Furthermore, we are the only startup in the world to demo autonomous driving capabilities in India while actively pursuing Level-5! The ongoing DARPA RACER program's objective is to drive off-road at human-driven speeds. We acknowledge the work, specifically by the University of Washington's team, where they build an explicit cost-map to drive off-road; our work is more unsupervised. We are perfecting the off-road autonomous driving policy, including optimal speed control, without any prior map of the environment and without any explicit algorithm for processing the visual-sensory information, and we should be able to demo this capability in the coming weeks in the form of a PoC. This demo utilized one VLP-32C LiDAR from Velodyne. Out of the 25+ trials we ran, there was one point where manual intervention was deemed necessary, as this was a single-lane road with bidirectional traffic. This kind of negotiation is an active area of research at Swaayatt Robots (स्वायत्त रोबोट्स), and we are learning a policy to solve this problem.
Autonomous Driving on Urban and Sub-Urban Roads in India
22 Jan 2023
At Swaayatt Robots, we have initiated autonomous driving trials on city roads in India during the nighttime, and our plan is to expand these trials to daytime operations by Q2-2023, even amidst the chaotic traffic. During these trials, we will be showcasing the following:
1. The decision-making capabilities of our reinforcement learning-based autonomous driving policies.
2. Navigation capabilities with stochastic localization against high-fidelity maps.
3. High-precision localization against high-fidelity maps for last-mile autonomy.
4. Perception techniques using both LiDAR and non-LiDAR based approaches.
These demonstrations represent significant steps in advancing our autonomous driving technologies. Furthermore, our autonomous driving trials on city roads in India serve as a testament to the adaptability of our technology in diverse and complex real-world environments. Navigating through the bustling streets, our autonomous vehicles showcase their ability to respond to unpredictable scenarios, ensuring safety and efficiency even in challenging traffic conditions. As we transition to daytime trials, our commitment to pushing the boundaries of autonomous driving remains unwavering. We envision a future where our technology not only enhances convenience and safety on the roads but also contributes to the evolution of urban mobility, making transportation more sustainable and accessible for all. With continuous innovation and rigorous testing, we are excited to bring the promise of autonomous driving closer to reality.
Autonomous driving navigating through tight obstacles, validating two of the motion planning algorithms that we are furthering at Swaayatt Robots, for which we will be submitting three papers -- two of those perhaps to the Robotics: Science and Systems conference -- and filing a provisional patent for one. This demo required integrating active and passive perception algorithmic pipelines, and by the end of 2022 we will finish most of the campus autonomous driving software, with a parking feature, which will go on to become the first commercial offering from Swaayatt. One of the motion planning algorithms in this demo, now being furthered with reinforcement learning, is one I originally invented as an undergrad at IIT Roorkee. This is the first time we are testing it on autonomous vehicles, as it mathematically guarantees safety for mobile robots exhibiting kino-dynamics, such as a campus vehicle navigating at manageable speeds. The second is an algorithm I developed with Zvi Shiller during my internship under him, and significantly improved in 2015. One version of this algorithm was published in the International Journal of Robotics Research in 2013, and, furthered by reinforcement learning, we hope to submit a revised version of the same. This demo is also a step towards executing a major demo around Diwali, ensuring that even at high speeds the vehicle can properly slow down and avoid cluttered environments without diverging too much from its lane, and without stopping unnecessarily, as most autonomous vehicles do in the US and Europe -- which is not a proper use of the capabilities of autonomous vehicles.
Autonomous Driving On Indian Highways: ADAS Feature Demo
01 Jan 2023
For the second time now, we have commenced comprehensive testing of our reinforcement learning-based ADAS (Advanced Driver Assistance System) and autonomous driving software on uncontrolled roads in India. Back in 2017/18, we demonstrated the effectiveness of the RL-first approach to autonomous driving on the unpredictable roads of India. Currently in the prototyping stage, the software showcased in this demo will enable ADAS, lane-keeping, and other L2 autonomous driving features on roads as unstructured as those found in India, all without the need for HD Maps. In this demonstration, the vehicle's core lane-keeping operation relies solely on a single forward-facing GigE-IP camera, out of the six cameras mounted on the roof-top. Our lane detection and generation software, demonstrated multiple times since 2017, enables autonomous vehicles to both detect and generate lane boundaries on-the-fly, actively perceiving the drivable region. A reinforcement learning-based planning and behavior generation software then assumes control of the vehicle. Depth perception is achieved using only a single camera, a challenge we have taken on to develop an affordable system, consequently limiting the vehicle's speed. For the product launch, we may opt to utilize a stereo camera system.
This demo showcases our autonomous driving software in the prototyping stage. When productized at Swaayatt Robots, it will enable (i) campus autonomous driving, (ii) autonomous forward and reverse parking with near-end-effector constraint satisfaction, (iii) execution of tight S-curves, and (iv) U-turns, with the intelligence to decide which spot to park at based on the clutter and a contextual understanding of the environment. In this demo the environment is divided into 3 zones, each zone having some drive lanes and some park lanes. The core motion planning software plans an optimized trajectory to meet the end-effector constraints and has to decide on the fly which divider passageway to select to transition to the next zone (see the sketch below). The planning is online, as opposed to the offline/global planning approach adopted by the industry. Next, we will blend this with a highly optimized #reinforcementlearning (RL) based decision-making framework, using Semi-Markov Decision Processes (S-MDPs), that will automatically decide whether to stop for other vehicles and when to initiate reverse-direction navigation, depending on the contextual understanding of the environment. One of my core focuses since undergrad has been autonomous navigation in unknown and unseen environments via reinforcement learning. Thus, the learned RL agents' policies will be independent of the environment / simulation data they are trained in, and will work in any campus environment, with stochastic localization against high-fidelity maps (like GPS localization). Further, to scale up last-mile autonomy, in this demo we also show the wire-frame high-fidelity map of the environment, which is useful for such last-mile #autonomousmobility tasks.
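An illustrative sketch of the on-the-fly passageway choice described above: score each divider opening by path length plus observed clutter and commit to the cheapest one online. The zone layout, clutter scores, and cost weight are invented for the example and are not the actual planner's cost function.

```python
import math

def pick_passageway(vehicle_xy, goal_xy, passageways, clutter_w=5.0):
    """passageways: list of (name, (x, y), clutter in [0, 1])."""
    def cost(p):
        _, (px, py), clutter = p
        to_gap = math.dist(vehicle_xy, (px, py))       # drive to the opening
        gap_to_goal = math.dist((px, py), goal_xy)     # then on to the goal
        return to_gap + gap_to_goal + clutter_w * clutter
    return min(passageways, key=cost)[0]

gaps = [("gap_A", (10.0, 5.0), 0.8),   # short but cluttered
        ("gap_B", (14.0, 5.0), 0.1)]   # slightly longer, nearly clear
print(pick_passageway((12.0, 0.0), (12.0, 12.0), gaps))   # -> gap_B
```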
Autonomous Driving: Executing Near-Drift Speed U-Turns and Lane Change Maneuvers
28 Jul 2022
Autonomous driving, executing U-turns at high speeds, i.e., near-drift speeds, and lane-change operations, via reinforcement learning. This video demonstrates Swaayatt Robots (स्वायत्त रोबोट्स) autonomous driving technology enabling (i) high-speed maneuvers, (ii) lane keeping, and (iii) lane-shift operations. The core navigation policy for the RL agent executed by our autonomous vehicle was learned largely in simulation and transferred to the real world. The vehicle entered the U-turn at 45 KM/H (near-drift speed given the tight turn and the vehicle's dynamics) and executed it precisely, thereafter performing a lane-switching maneuver to reach its target parking spot. This demo was also performed to machine-learn and iteratively calibrate our system for the 100 KM/H autonomous driving demo (which we will be executing within the next 7 months), where the vehicle will have to execute many such high-speed turns on mountainous terrain.
Swaayatt Robots' autonomous vehicle was featured on the Zee Hindustan news channel. In this video, CEO Mr. Sanjeev Sharma demonstrates the autonomous driving of the vehicle.
Reinforcement Learning based Motion Planning for Autonomous Driving
24 Oct 2022
Here we showcase the behavior of our autonomous vehicle:
(i) When it is unconstrained, avoiding head-on approaching vehicles at considerable speeds, in its lane, in the campus, and
(ii) When it is constrained by rules, where it uses a stop-verify-proceed mode, waiting for vehicles coming directly into its navigation lane -- typically what is done in US and European settings.
These demos also showcase the #reinforcementlearning based #autonomousdriving and decision-making research I had been doing earlier, and which we are continuing at Swaayatt Robots (स्वायत्त रोबोट्स) to achieve Level-5 autonomy in the near future.
In all these experiments, the field of view of the vehicle for decision-making was limited. This was done to test the behavior of the algorithms against sudden, unforeseeable obstacles / pedestrians / animals jumping into the vehicle's current lane at high speeds.
The core motion planning algorithm currently uses two reinforcement learning agents. The full algorithmic framework uses 5 RL agents, which we will be showcasing shortly, in October and November, on highways and mountainous roads, along with a complete end-to-end autonomous driving software package for campus #autonomousvehicles.
This demo used one LiDAR and 4 forward facing cameras.
This is #reinforcementlearning put to work enabling autonomous driving -- a small proof-of-concept showcasing that the future of #autonomousdriving is learning from human demonstrations + random walk, and that rather than imitating a human expert, we should use the demonstrations as a baseline.
The future of autonomous driving is going to be a lot less dependent on individual algorithmic pipelines, and thus a lot less dependent on the requirement of labeled data.
This demo showcases our technology pipeline, which is going to be productized for campus and parking lot navigation tasks.
The motion planning and decision-making frameworks use reinforcement learning (both random walk and human demonstrations) to learn navigation policies. The foundation of this work is my research on motion planning in unknown environments via reinforcement learning, which I started back in 2009 as an undergrad and on which I subsequently published 3 papers.
Our #autonomousvehicle in this demo: 1) Is not using any information beyond a 7 m radius (almost twice the length of the vehicle) and a 120-degree forward field of view -- this changes for reverse-direction motion, where rear cameras are used.
2) Is not using a map for any decision-making or motion planning task. The map is only used to project where the vehicle is, to observe the behavior of the overall planning framework.
3) The RL agent is guiding the vehicle to meet end-effector constraints at the goal location -- an important task for parking. Meeting end-effector constraints for general outdoor mobile robots is a challenging task for any online motion planner -- traditionally, offline global motion planners are used for such tasks (or perhaps special kino-dynamic algorithms like bidirectional RRT/RRT*). A small sketch of the restricted sensing window from point 1) follows.
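The sketch below filters obstacle points to the 7 m radius and 120-degree forward cone described in point 1). The obstacle format and heading convention are assumptions for illustration.

```python
import math

def visible(ego_xy, ego_heading_rad, obstacles, r_max=7.0, fov_deg=120.0):
    """Keep only obstacle (x, y) points inside the planner's sensing window."""
    half_fov = math.radians(fov_deg) / 2.0
    out = []
    for ox, oy in obstacles:
        dx, dy = ox - ego_xy[0], oy - ego_xy[1]
        if math.hypot(dx, dy) > r_max:
            continue                      # beyond the 7 m radius
        bearing = math.atan2(dy, dx) - ego_heading_rad
        bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
        if abs(bearing) <= half_fov:
            out.append((ox, oy))
    return out

obs = [(3.0, 1.0), (10.0, 0.0), (-2.0, 0.0)]   # near-front, too far, behind
print(visible((0.0, 0.0), 0.0, obs))           # -> [(3.0, 1.0)]
```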
Going forward, we are going to eliminate the explicit perception pipeline and algorithms for many autonomous driving tasks, most prominently off-road driving.
This demo also showcases our controller, which is being learned end-to-end for steering stabilization. Traditional approaches first plan a time-parametrized trajectory and then use a controller (PID, pure-pursuit, etc.) to follow that trajectory.
Our controller is more stable and will be more reliable than traditional MPC controllers, and will have an inherent safety check -- taking dynamics, drift, and toppling constraints into account, all learned end-to-end via unsupervised machine learning. For high-speed navigation, this will further be an integral component of the decision-making & motion generation RL agent, without requiring an explicit agent for the task.
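For contrast with the learned controller, here is the classical pure-pursuit steering law mentioned above: steer toward a look-ahead point on the planned path. The wheelbase and look-ahead point are example values, not the demo vehicle's parameters.

```python
import math

def pure_pursuit_steering(ego_xy, ego_heading, lookahead_xy, wheelbase=2.7):
    """Steering angle (rad) that arcs a bicycle model to the look-ahead point."""
    dx = lookahead_xy[0] - ego_xy[0]
    dy = lookahead_xy[1] - ego_xy[1]
    # transform the look-ahead point into the vehicle frame
    x_v = math.cos(-ego_heading) * dx - math.sin(-ego_heading) * dy
    y_v = math.sin(-ego_heading) * dx + math.cos(-ego_heading) * dy
    ld = math.hypot(x_v, y_v)                 # look-ahead distance
    curvature = 2.0 * y_v / (ld * ld)         # classic pure-pursuit relation
    return math.atan(wheelbase * curvature)

# vehicle at the origin facing +x; path point 8 m ahead, 1 m to the left
angle = pure_pursuit_steering((0.0, 0.0), 0.0, (8.0, 1.0))
print(f"steering: {math.degrees(angle):.1f} deg")
```

A learned end-to-end controller replaces this fixed geometric law with a policy that can fold in dynamics, drift, and toppling constraints, which is the design choice the paragraph above argues for.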
This video captures the inaugural autonomous test run on TR1, our custom-engineered test circuit. The primary objective of these initial tests is to calibrate our steering controller, evaluate the performance of our motion planner, and scrutinize our decision-making and prediction algorithms, all amidst deliberately induced, highly chaotic traffic conditions created on TR1.
The steering controller calibration is central here, ensuring control precision throughout the vehicle's run, while the motion planner is assessed on its ability to navigate the dynamically evolving complexities of the track's layout.
The decision-making and prediction algorithms, which drive the vehicle's adaptability, are likewise stress-tested within the chaotic traffic conditions that TR1's configuration facilitates.
Swaayatt Robots: Autonomous Driving and Steering Calibration Test
08 Jul 2018
This video showcases a demonstration of our autonomous vehicle's nocturnal navigation within a campus environment. Notably, in the campus we carry out the calibration of the steering with the vehicle itself.
In this demonstration, our autonomous vehicle maneuvers through the campus in darkness. The technical focal point is the calibration of our steering controller, which is fine-tuned to account for the vehicle's velocity: as the speed fluctuates, the controller modulates its output so that the steering response stays synchronized with the intended trajectory.
Swaayatt Robots: Smooth Steering Controller and Campus Motion Planner
17 Jun 2018
This video demonstrates our smooth steering controller working in combination with our campus motion planning algorithm. We assess how effectively the steering controller handles various scenarios and navigates the vehicle smoothly in our campus environment.
This video is a live demonstration of our vehicle's autonomous navigation, particularly when traversing exceedingly confined spaces. This instance is a preliminary test run, conducted as part of our assessment process.
Such test runs are executed regularly at Swaayatt Robots, primarily to calibrate and evaluate the various controller parameters -- fine-tuning that is essential for reliable performance when navigating tight spaces.
Autonomous navigation through such constrained environments demands a fusion of precise control, sensor interpretation, and real-time decision-making: each movement must negotiate the intricate pathways while avoiding obstacles and maintaining the intended trajectory.
Motion Planning in Unknown Environments for Self-Driving Vehicles
13 Oct 2017
A reinforcement learning and convex optimization based motion planning algorithm is demoed for cruise-mode navigation on highways, with no a priori map of the environment or of the obstacles' configurations. The task of the motion planner is to plan high-speed trajectories with a very limited field of view, avoid collisions with unforeseeable obstacles, and perform lane keeping whenever possible. The motion planner is also capable of ensuring convergence to the goal when navigating completely unknown environments, but this capability is not demoed in this video.
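To illustrate the convex-optimization side of such a planner, the sketch below smooths a set of coarse waypoints by minimizing squared second differences (a proxy for curvature) while staying anchored to the originals. This is a generic convex formulation with example weights, not the exact one used in the demo.

```python
import numpy as np

def smooth_path(waypoints, smooth_w=10.0, anchor_w=1.0):
    """Minimize anchor_w*||x - w||^2 + smooth_w*||D2 x||^2 in closed form."""
    pts = np.asarray(waypoints, dtype=float)       # (N, 2) path points
    n = len(pts)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = (1.0, -2.0, 1.0)          # discrete second derivative
    A = anchor_w * np.eye(n) + smooth_w * D2.T @ D2
    return np.linalg.solve(A, anchor_w * pts)      # convex => unique optimum

zigzag = [(0, 0), (1, 1), (2, -1), (3, 1), (4, 0)]
print(np.round(smooth_path(zigzag), 2))            # pulled toward a smooth arc
```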
Sparse-HD Maps with High-Dimensional Feature Layer for Autonomous Driving
15 Jul 2022
A Sparse High-Definition (S-HD) map, with an integrated multi-dimensional feature layer, of a campus environment, demoing Swaayatt Robots (स्वायत्त रोबोट्स) mapping technology for both (i) generic and (ii) last-mile autonomous driving. This is another milestone in the commercial scale-up of our autonomous driving technology -- producing S-HD maps with a multi-dimensional feature layer. The multi-dimensional feature layer, part of the S-HD maps, allows autonomous vehicles to know where they are in the environment with respect to various delimiters -- thereby enabling parking and other last-mile autonomous mobility functionalities.
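A hypothetical data-structure sketch of what a sparse map node with a feature layer could look like: sparsely sampled road points, each annotated with nearby delimiters the vehicle can reference for parking and last-mile tasks. All field names are invented for illustration; the actual S-HD map format is not public.

```python
from dataclasses import dataclass, field

@dataclass
class Delimiter:
    kind: str          # e.g. "kerb", "parking_line", "gate"
    offset_m: float    # lateral offset from the node

@dataclass
class MapNode:
    lat: float
    lon: float
    features: list[Delimiter] = field(default_factory=list)

# Sparse: nodes every few metres instead of a dense point-cloud everywhere.
node = MapNode(23.2599, 77.4126,
               [Delimiter("kerb", 3.1), Delimiter("parking_line", -2.4)])
print(f"node has {len(node.features)} delimiter annotations")
```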
Sparse High Definition Maps (S-HD Maps) for Autonomous Driving in India
14 Jul 2022
Demo of our technology for building Sparse High-Definition (S-HD) maps of roads in India for autonomous driving. The demo was done using a single LiDAR, mounted on our autonomous Mahindra Bolero vehicle. Currently, Swaayatt Robots (स्वायत्त रोबोट्स) is the only company building maps in India for autonomous driving applications, including dense HD maps for last-mile autonomous mobility. Soon we will be able to offer such maps as a service in India. Since we require only GPS localization over the maps -- precise global localization is not needed for our autonomous driving technology to function -- we can deal with stochasticity, or noise, in the environmental representations in the maps. This is where S-HD maps play a critical role: they have a huge advantage in that they can be built quickly and require significantly less maintenance compared to dense HD maps.
Precise Localization of Autonomous Vehicles in Cluttered Environment
24 May 2022
This video showcases our prototype technology for autonomous vehicle localization within complex and cluttered environments in India, underscoring our commitment to advancing the frontiers of autonomous navigation.
DGN-I: Autonomous Driving Perception on Urban Roads
15 Apr 2021
Demo of the Swaayatt Robots (स्वायत्त रोबोट्स) perception algorithmic framework enabling road, obstacle, and traffic-sign detection. This framework will be part of our flagship product, AutonomousOne, to enable ADAS and autonomous driving (2023), both in India and abroad. The framework is around an order of magnitude more computationally efficient than state-of-the-art deep learning models, and will therefore draw significantly less energy from the vehicle. The road detection (via semantic segmentation) model in standalone form consumes only 13.75 GFlops (in this demo), compared to a few hundred GFlops for state-of-the-art models. Going forward, we will integrate cues from our proprietary maps to achieve near-ground-truth results as we scale all over India. For the combined task -- including both detecting and generating lane delimiters on-the-fly -- we will achieve it in <30 GFlops; typical state-of-the-art models would consume 200+ GFlops for such a task!
DGN-I: Semantic Segmentation and Obstacles Detection (Short and Far Range)
09 Mar 2021
At the PoC level, we now have the capability to perform semantic segmentation and obstacle detection using only cameras. This research, and the technology, will be extended to night vision as well. This is a demo of our multi-objective deep neural network, DGN-I, which performs simultaneous segmentation and obstacle detection on urban roads in India, in the city of Bhopal. DGN-I uses special computational units that Sanjeev Sharma developed in 2016, which have been part of most of our DNNs, allowing it to achieve high accuracy with very low computational demands. This demo consumes 15.65 GFlops for simultaneous:
Semantic Segmentation for Road Detection
Obstacle Bounding Box Detection
It is a highly computationally efficient deep learning system with practically acceptable accuracy, in both joint and standalone operational modes. Typical DNNs consume 140-675 GFlops for obstacle detection, and hundreds of GFlops for semantic segmentation.
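The efficiency claim rests on sharing compute across tasks. Below is a generic PyTorch sketch of that multi-objective idea: one shared backbone feeds both a segmentation head and a detection head, so the joint cost stays far below running two separate networks. Layer sizes and heads are arbitrary; this is not the actual DGN-I architecture.

```python
import torch
import torch.nn as nn

class MultiObjectiveNet(nn.Module):
    def __init__(self, n_classes=2, n_anchors=3):
        super().__init__()
        self.backbone = nn.Sequential(            # shared compute for both tasks
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, n_classes, 1)        # per-pixel labels
        self.det_head = nn.Conv2d(64, n_anchors * 5, 1)    # box + score per anchor

    def forward(self, x):
        feats = self.backbone(x)                  # computed once, reused twice
        return self.seg_head(feats), self.det_head(feats)

net = MultiObjectiveNet()
seg, det = net(torch.randn(1, 3, 256, 256))
print(seg.shape, det.shape)   # both outputs from one backbone pass
```

Because the backbone dominates the FLOP count, adding a second head costs little extra compute or memory, which is why multi-objective designs can scale sub-linearly with the number of tasks.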
Obstacles Detection in Densely Cluttered Environments in India
05 Mar 2021
This demo consumes 15.66 GFlops per frame for DGN-I, for simultaneous:
Semantic Segmentation
Obstacle Bounding Box Detection
In the perception department, by the end of the year we will have achieved, at PoC level, the capability to run most of the algorithms with very high accuracy in one city in India. Thereafter, we will rapidly scale up our operations both in India and abroad. The tough data, and the capable algorithms, will help in achieving safety guarantees when deployed in structured environmental conditions in North America and the EU.
One shortcoming of existing #deeplearning approaches at large is that most networks vary significantly with the task, at both the architecture and compute levels. In typical #autonomousdriving or #adas applications, when algorithms need to be embedded on a compute platform, the memory, at all levels, scales linearly along with the computation. The research I started back in 2016 focused on multi-objective deep neural frameworks, and DGN-I is an instance of this research. Even when networks are not multi-objective, our networks usually have 50-60% architecture and compute overlap, making the memory requirement scale sub-linearly with the number of tasks in a typical ADAS / Autonomous Driving application. This is the case with two of our prominent frameworks -- Semantic Segmentation (DGN) and the Lane Detection and Generation algorithm (LDG) -- very different tasks, but with significant overlap.
Semantic Segmentation and Obstacle Detection in Densely Cluttered Environments in India
04 Mar 2021
Semantic segmentation and obstacle detection are performed using only cameras. This research and technology will also be extended to night vision. Here's a demo of our multi-objective deep neural network, DGN-I, which simultaneously handles segmentation and obstacle detection on urban roads in #India, specifically in the city of Bhopal. DGN-I utilizes special computational units that Sanjeev Sharma developed in 2016, which have been integrated into most of our deep neural networks (DNNs). These units enable it to achieve high accuracy with low computational demands. This demo consumes 15.65 GFLOPs for simultaneous:
Semantic Segmentation for Road Detection
Obstacle Bounding Box Detection
Our deep learning system is highly computationally efficient, delivering practically acceptable accuracy in both joint and standalone operational modes. In contrast, typical DNNs consume 140-675 GFLOPs for obstacle detection and hundreds of GFLOPs for semantic segmentation. As we prepare for a 100 KM/H #autonomousdriving test in India, we will test our algorithms in much denser and cluttered environments. This will ensure that the system remains robust on highways, mountains, and single-lane non-urban roads, which are expected to be sparser environments.
DGN-I: Under Varying Lighting Conditions - Road and Obstacles Detection
04 Mar 2021
DGN-I operates effectively under varying lighting conditions. Some of our older cameras, purchased in 2014 (approximately $59 each), produce images with blur and aliasing effects; in direct sunlight, the image quality degrades even further. Here's a robustness test and demonstration of DGN-I, a multi-criterion network capable of simultaneously performing semantic segmentation and obstacle detection. The current demo consumes 15.27 GFLOPs per image. This represents roughly a 10x improvement in computational efficiency compared to typical state-of-the-art semantic segmentation DNNs, and a 10-40x improvement compared to Yolo, EfficientDet+Net, and Mask R-CNN. It's worth noting that in direct sunlight the lower left and right edges of the roads appear very blurry. Additionally, even under good lighting, the camera introduces aliasing effects, which can make boundary delineation challenging even when the real-world boundaries are clear.