Artificial Intelligence for Business: Autonomous Vehicles for Consumers
Autonomous, electric vehicles are the AI-empowered future of mobility. Applications of autonomous vehicles include shared cars and robotaxis (Uber, Lyft, GM, Waymo), automated delivery services (currently being tested by Amazon), employee transportation (Apple), savings on human labor across transportation tasks, and improved experience and safety in consumer transportation (BMW, Audi, GM, Nissan, Tesla, and many more).
This paper concentrates on autonomous vehicles for consumers, which are expected to replace traditional cars over time, using Tesla as the primary example. The automotive AI market is expected to exceed $27 billion by 2025, growing at a 48% CAGR (Deloitte, 2018). According to ABI Research (2018), there will be almost 8 million autonomous or semi-autonomous vehicles on the road by 2025. Since roughly 93% of all traffic accidents are due to human error, AI can be applied to make driving safer, easier, and more environmentally friendly (English, 2020). Tesla claims its cars have covered more than 2.3 billion miles on Autopilot. Most new vehicles on the market today have some level of ADAS (Advanced Driver Assistance Systems) built in.
The Society of Automotive Engineers (SAE) has defined six levels of driving automation, ranging from Level 0 (fully manual) to Level 5 (fully autonomous). The US Department of Transportation has adopted these standards (Deloitte, 2018):
Level 0 (No Driving Automation)
The majority of automobiles on the road today are Level 0: manually operated. While there may be systems in place to support the driver, the dynamic driving task is performed entirely by the human. The emergency braking system, for example, does not count as automation because it does not actually “drive” the car.
Level 1 (Driver Assistance)
This is the lowest level of automation. The vehicle features a single automated system for driver assistance, such as steering or accelerating (cruise control). Adaptive cruise control, where the vehicle keeps a safe distance behind the car ahead, qualifies as Level 1 because the human driver monitors the other aspects of driving, such as steering and braking.
Level 2 (Partial Driving Automation)
This means an advanced driver assistance system, or ADAS. The vehicle can control both steering and acceleration/deceleration. The automation still falls short of self-driving because a human sits in the driver’s seat and can take control of the vehicle at any time. Tesla Autopilot and Cadillac (General Motors) Super Cruise both qualify as Level 2.
Level 3 (Conditional Driving Automation)
The jump from Level 2 to Level 3 is substantial from a technological standpoint, but subtle if not negligible from a human standpoint. Level 3 vehicles have “environmental detection” capabilities and can make informed decisions for themselves, such as accelerating past a slow-moving vehicle. But they still require human override: the driver must remain alert and ready to take control if the system is unable to execute the task.
Nearly two years ago, Audi (Volkswagen) announced that the next generation of the A8, their flagship sedan, would be the world’s first production Level 3 vehicle. And they delivered. The 2019 Audi A8L arrives in commercial dealerships this fall. It features Traffic Jam Pilot, which combines a lidar scanner with advanced sensor fusion and processing power (plus built-in redundancies should a component fail).
However, while Audi was developing their engineering marvel, the regulatory process in the U.S. shifted from federal guidance to state-by-state mandates for autonomous vehicles. So for now, the A8L is still classified as a Level 2 vehicle in the United States and will ship without the key hardware and software required for Level 3 functionality. In Europe, however, Audi will roll out the full Level 3 A8L with Traffic Jam Pilot (in Germany first).
Level 4 (High Driving Automation)
The key difference between Level 3 and Level 4 automation is that Level 4 vehicles can intervene if things go wrong or there is a system failure. In this sense, these vehicles do not require human interaction in most circumstances. However, a human still has the option to manually override.
Level 4 vehicles can operate in self-driving mode. But until legislation and infrastructure evolve, they can only do so within a limited area (usually an urban environment where top speeds average 30 mph). This is known as geofencing. As such, most Level 4 vehicles in existence are geared toward ridesharing.
Level 5 (Full Driving Automation)
Level 5 vehicles do not require human attention: they will not even have steering wheels or acceleration/braking pedals. They will be free from geofencing, able to go anywhere and do anything an experienced human driver can do. Fully autonomous cars are undergoing testing in several pockets of the world, but none are yet available to the general public.
While the future of autonomous vehicles is promising and exciting, mainstream production in the U.S. is still a few years away from anything higher than Level 2. This is not because of technological capability, but because of safety, or rather the lack of it.
Relevant Vocabulary
The term self-driving is often used interchangeably with autonomous. A self-driving car can drive itself in some or even all situations, but a human passenger must always be present and ready to take control. Self-driving cars would fall under Level 3 (conditional driving automation) or Level 4 (high driving automation). They are subject to geofencing, unlike a fully autonomous Level 5 car that could go anywhere.
Popular autonomous driving abbreviations include:
- ADAS: Advanced driver assistance systems;
- LIDAR: Light detection and ranging (sensor-based);
- HMI: Human-machine interface;
- HUD: Head-up display;
- SoC: System-on-chip;
- CPU: Central processing unit (1.5 giga operations per second);
- GPU: Graphics processing unit (17 giga operations per second);
- NNA: Neural network accelerator chip (2,100 giga operations per second, introduced by Tesla) (English, 2020).
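To put those throughput figures in perspective, here is a minimal Python sketch comparing them; the numbers are taken directly from the list above, and the ratios are simple arithmetic:

```python
# Throughput figures from the list above, in giga-operations per second.
throughput_gops = {"CPU": 1.5, "GPU": 17, "NNA": 2100}

baseline = throughput_gops["CPU"]
for chip, gops in throughput_gops.items():
    print(f"{chip}: {gops:7.1f} GOPS ({gops / baseline:,.0f}x the CPU)")
# The NNA delivers roughly 1,400x the CPU's throughput and ~124x the GPU's.
```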
Necessary Hardware and Software Technology in Autonomous Vehicles
Hardware for Autonomous Cars
One of the most important components is the system-on-chip. Chips are the small elements that keep all the computers inside an autonomous vehicle running. For example, Tesla has partnered with Samsung to produce a 5-nanometre semiconductor; only two companies in the world offer this technology, as it is driven by hi-tech expertise (Schmidt, 2021). The chips are made from silicon, and Tesla makes sure its cars maximize silicon performance-per-watt. Special programs and drivers communicate with the chip, with an emphasis on performance and energy saving. As a result, Tesla’s new generation of Autopilot has a 40x increase in processing power compared to the previous generation of onboard computers, which shows how quickly hardware in this area is developing.
Autonomous vehicles also rely heavily on data collected from cameras, ultrasonic sensors, radars, and other onboard components. Graphic output provided instantly on a display inside the vehicle is another integral component. Autonomous vehicles create and maintain a map of their surroundings based on a variety of sensors positioned in different parts of the vehicle. Tesla develops its own video processing tools built on neural networks to ensure the reliability of the incoming visual data.
Radar sensors monitor the position of nearby vehicles. Video cameras detect traffic lights, read road signs, track other vehicles, and look for pedestrians. LIDAR sensors bounce pulses of light off the car’s surroundings to measure distances, detect road edges, and identify lane markings. Ultrasonic sensors in the wheels detect curbs and other vehicles when parking. Actuators translate electronic signals into mechanical actions of the vehicle. Because of this heavy dependence on the surroundings, autonomous vehicles collect outside data through radio-based and sensor communication. Radio systems are responsible for analyzing traffic elements such as tolls, signs, and traffic lights, as well as exchanging data with other vehicles.
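To make this division of labor concrete, below is a minimal, hypothetical Python sketch of how readings from the different sensor families might be gathered into a single snapshot of the environment at one timestep. All class and field names are illustrative assumptions, not any manufacturer’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class RadarTrack:
    """A nearby vehicle as seen by radar: range and closing speed."""
    distance_m: float
    relative_speed_mps: float

@dataclass
class CameraDetection:
    """An object recognized by a video camera (sign, light, pedestrian...)."""
    label: str
    bearing_deg: float

@dataclass
class EnvironmentSnapshot:
    """One fused picture of the surroundings at a single timestep."""
    radar: list[RadarTrack] = field(default_factory=list)
    camera: list[CameraDetection] = field(default_factory=list)
    lidar_points_xyz: list[tuple[float, float, float]] = field(default_factory=list)
    ultrasonic_curb_m: list[float] = field(default_factory=list)

snapshot = EnvironmentSnapshot(
    radar=[RadarTrack(distance_m=32.0, relative_speed_mps=-1.5)],
    camera=[CameraDetection(label="pedestrian", bearing_deg=12.0)],
)
print(len(snapshot.radar), "radar track(s) in this snapshot")
```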
Image processing is an integral part of the vehicle’s navigation relative to moving objects such as other vehicles and pedestrians. NVIDIA is a key player in the field, providing graphics processors that enable Level 2+ automated driving systems. For example, such vehicles can change lanes on their own or take the necessary highway exits and entrances based on visual input from the outside. NVIDIA DRIVE AGX Xavier can conduct up to 30 trillion operations per second thanks to six powerful processors: a GPU, a Deep Learning Accelerator, a CPU, a stereo flow accelerator, an Image Signal Processor, and a Vision Accelerator. It is recommended for Level 3 automated driving. If two Xaviers are combined and special Turing GPUs are added, the platform is ready for Levels 4 and 5.
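For a rough sense of what 30 trillion operations per second means, the sketch below divides that headline figure across a camera feed; the 30-frames-per-second rate is an assumption for illustration only:

```python
# Back-of-the-envelope budget implied by the Xavier figure above.
ops_per_second = 30e12        # 30 trillion operations per second
frames_per_second = 30        # assumed camera frame rate

ops_per_frame = ops_per_second / frames_per_second
print(f"~{ops_per_frame:.0e} operations available per camera frame")  # ~1e+12
```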
Owning a private autonomous vehicle is similar to owning a smartphone: it is still powered by electricity and the Internet. Autonomous vehicles also need infrastructure hardware in order to operate successfully, and mobile Internet coverage is another semi-hardware essential for their operation.
That is why charging stations, public or private, with appropriate charging speeds and service costs are necessary to support the autonomous driving ecosystem. There are roughly 40,000 electric charging stations in the US, only 25% of the number of gas stations. The speed of charge still depends on the individual compatibility between the car and the charging cable; a current Tesla Supercharger can add up to 1,000 miles of range per hour of charging. On the other hand, timely replacement and recycling of batteries is also a requirement for autonomous vehicles, as batteries need to be replaced every 4 years on average.
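As a quick back-of-the-envelope illustration of that charging rate (ignoring the taper in charging speed that real batteries exhibit as they fill):

```python
# How long does it take to add a given amount of range at the quoted rate?
charge_rate_miles_per_hour = 1000   # Tesla Supercharger figure cited above
range_to_add_miles = 250            # hypothetical trip

minutes = range_to_add_miles / charge_rate_miles_per_hour * 60
print(f"Adding {range_to_add_miles} miles of range takes ~{minutes:.0f} minutes")
# -> ~15 minutes, under the idealized assumption of a constant charge rate.
```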
There are several important considerations for the future development of the hardware needed for autonomous vehicles to advance. First, the real-time architecture, with its hierarchy of sensors and actuators, should increase its redundancy. Second, the speed of data acquisition and processing will become a key factor, so both cameras and displays will advance to higher resolutions. The electronics supporting them will only become more sophisticated, with numerous nodes, links, assemblies, and cables. Data collection for autonomous driving resembles a busy highway where large streams of data move in one direction or another. The bandwidth of distributed network structures and point-to-point data pipes will have to increase to meet the requirements of this demanding data-driven ecosystem. For comparison, Level 5 autonomous vehicles should be able to send and exchange 25 GB of data with the cloud every hour.
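To gauge what 25 GB per hour demands of the link, here is a quick conversion to a sustained bit rate, assuming decimal gigabytes and a steady stream:

```python
# Convert the hourly cloud-exchange volume quoted above to a sustained rate.
gigabytes_per_hour = 25
bits = gigabytes_per_hour * 8e9     # decimal gigabytes -> bits
seconds = 3600

megabits_per_second = bits / seconds / 1e6
print(f"Sustained link rate: ~{megabits_per_second:.1f} Mbit/s")  # ~55.6 Mbit/s
```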
In addition to being fast, data collection must also be high in both quality and reliability. The vehicle’s connectivity to outside information should incorporate extra factors such as road ice, bad weather, traffic jams, and accidents. The synthesis of engineering and materials science will let vehicles take advantage of the best available materials for electromechanical connectivity, accounting for the peculiarities of contact physics.
When it comes to autonomous driving, safety is one of the most important factors, as human lives are involved. As the pressure of data requirements grows, the electrification of cars becomes inevitable. That is why an optimal physical layer for data transportation, and protection of both data and hardware, are essential for the technology’s evolution. One option is fiber-optic communications, which offer several advantages: less signal distortion, zero electromagnetic emissions, and no crosstalk. Combining high-power electric cables with optical communication systems can ensure safe, interference-free operation. Due to the growth of interlinks and cables, there is a tendency for all actuators and sensors to become smaller.
Onboard and Offboard Software for Autonomous Cars
The onboard software includes high-definition maps and GPS, localization and environmental mapping, vehicle operating systems, and the overall supervision platform. The offboard software is the data center and cloud operation that stores and updates algorithms and maps. Data captured from the sensors is processed with the help of low-level, fast, and memory-efficient code. It is then visualized on the consumer screens and used to control the driving processes.
Tesla’s autonomy algorithms output over 10,000 distinct predictions at each timestep. A reliable representation of the environment is possible thanks to trained neural networks that establish ground truth through the car’s multiple cameras, radars, and sensors.
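Tesla does not publish its network architecture, but the general multi-task idea can be sketched: one shared feature map feeds several small output heads, so a single forward pass yields many distinct predictions at once. The following PyTorch sketch is a hypothetical illustration; the head names, channel counts, and tasks are assumptions:

```python
import torch
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    """Hypothetical multi-task output stage: shared features, many heads."""

    def __init__(self, feature_channels: int = 256):
        super().__init__()
        self.objects = nn.Conv2d(feature_channels, 9, kernel_size=1)  # boxes + class
        self.lanes = nn.Conv2d(feature_channels, 2, kernel_size=1)    # lane / not lane
        self.depth = nn.Conv2d(feature_channels, 1, kernel_size=1)    # per-pixel depth

    def forward(self, features: torch.Tensor) -> dict[str, torch.Tensor]:
        # Every head fires on each timestep, so one pass produces
        # thousands of per-pixel predictions across all tasks.
        return {
            "objects": self.objects(features),
            "lanes": self.lanes(features),
            "depth": self.depth(features),
        }

heads = MultiTaskHeads()
features = torch.randn(1, 256, 40, 60)   # dummy backbone output
outputs = heads(features)
print({name: tuple(t.shape) for name, t in outputs.items()})
```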
Data for self-driving cars is collected through sensors, cameras, and radars, and then integrated with data from maps, GPS, and historical data collected by AI (retrieved from the cloud). Due to the unpredictability of the environment, the vehicle’s computer needs to instantly build a map of this environment and localize itself within it. Passive sensors detect light and radiation reflected from objects in space, while active sensors emit a signal of their own and measure its reflection.
The surface of the sensor is divided into pixels, each of which can detect the intensity of the received signal based on the amount of charge accumulated at that location. By using multiple sensors that are sensitive to different wavelengths of light, color information can also be encoded in such a system (Nasseri et al., 2020).
Active sensors have a signal transmission source and rely on the principle of time-of-flight (ToF) to sense the surroundings. ToF measures the travel time of a signal from its source to a target, expecting the signal to ricochet back.
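In code, time-of-flight ranging is just this halved round trip. A minimal sketch (the signal speeds are standard physical constants; the sample timings are hypothetical):

```python
# Distance from a round-trip time-of-flight measurement: the signal travels
# out and back, so one-way distance = signal_speed * round_trip_time / 2.
SPEED_OF_LIGHT_MPS = 299_792_458.0   # for RADAR and LIDAR
SPEED_OF_SOUND_MPS = 343.0           # in air at ~20 C, for ultrasonic sensors

def tof_distance_m(round_trip_s: float, signal_speed_mps: float) -> float:
    return signal_speed_mps * round_trip_s / 2.0

print(tof_distance_m(2e-6, SPEED_OF_LIGHT_MPS))   # lidar echo after 2 us: ~300 m
print(tof_distance_m(0.01, SPEED_OF_SOUND_MPS))   # sonar echo after 10 ms: ~1.7 m
```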
Ultrasonic sensors (known as SONAR: SOund NAvigation and Ranging) use ultrasound waves and are by far the oldest and cheapest of these systems. RADAR (RAdio Detection And Ranging) uses radio waves, which travel at the speed of light and have the longest wavelengths in the electromagnetic spectrum. LIDAR (LIght Detection And Ranging) uses light in the form of a laser. LIDAR sensors emit 50,000–200,000 laser pulses per second to cover an environment and combine the returning signals into a 3D map. These sensors have traditionally been the most expensive, costing around $10k for the well-known roof-mounted unit with a 360-degree view (Nasseri et al., 2020).
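Each returning pulse carries a measured range plus the known firing direction of the laser, which is enough to place one point in 3D; a full scan of such points forms the map. A minimal sketch of that conversion (the coordinate conventions are an assumption):

```python
import math

def lidar_return_to_xyz(range_m: float, azimuth_rad: float,
                        elevation_rad: float) -> tuple[float, float, float]:
    """Convert one laser return (spherical coordinates) into a Cartesian point."""
    horizontal = range_m * math.cos(elevation_rad)
    x = horizontal * math.cos(azimuth_rad)   # forward
    y = horizontal * math.sin(azimuth_rad)   # left
    z = range_m * math.sin(elevation_rad)    # up
    return x, y, z

print(lidar_return_to_xyz(10.0, math.radians(45), math.radians(2)))
```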
Regression algorithms help identify and localize objects and predict their movements by comparing variables for the forecast. The decision-making mechanism uses several models to recognize scenarios and carry out the necessary tasks. Pattern recognition algorithms recognize objects by reducing the data set through commonalities and patterns. Clustering algorithms serve the same task by dividing all objects into clusters based on common features. Each new algorithm can be evaluated against the entire Tesla fleet (Tesla Website, 2021).
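As a minimal illustration of the clustering idea (not Tesla’s implementation), the sketch below groups 2D points into object candidates using a simple distance threshold; the threshold value is an arbitrary assumption:

```python
import math

def cluster_points(points: list[tuple[float, float]],
                   max_gap_m: float = 0.5) -> list[list[tuple[float, float]]]:
    """Greedy clustering: a point joins the first cluster that already
    contains a point within max_gap_m; otherwise it starts a new cluster.
    (A single greedy pass can split clusters that a full pass would merge.)"""
    clusters: list[list[tuple[float, float]]] = []
    for p in points:
        for cluster in clusters:
            if any(math.dist(p, q) <= max_gap_m for q in cluster):
                cluster.append(p)
                break
        else:
            clusters.append([p])
    return clusters

scan = [(0.0, 0.0), (0.3, 0.1), (5.0, 5.0), (5.2, 5.1)]
print(len(cluster_points(scan)), "object candidates")   # -> 2
```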
This set of tools helps autonomous vehicles function independently in regular situations and find good solutions to uncommon scenarios. Issues associated with unpredictable factors include the irregular behavior of pedestrians, animals, or other vehicles (difficult to react to and easy to misinterpret); perception complexities associated with weather (LIDAR sensors cannot collect data for the software during heavy fog); and vulnerability to cyberattacks and unsolicited remote control.
AI software can also be used in autonomous vehicles to monitor and control the inner vehicle operations. One example is the recent BlackBerry signaling system developed for autonomous vehicles: it reports the need to refuel the car regardless of the energy source, be it batteries, diesel, or gasoline (Schaffer, 2021).
Sophisticated software then processes this sensory input, plots a path, and sends instructions to the vehicle’s actuators, which control acceleration, braking, and steering. Hard-coded rules, obstacle-avoidance algorithms, predictive modeling, and object recognition help the software follow traffic rules and navigate obstacles.
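One concrete example of such an actuator instruction is lane keeping. The sketch below shows a proportional steering controller that nudges the wheel back toward the lane center; the gain and angle limit are illustrative assumptions, not production values:

```python
def steering_command_rad(lane_offset_m: float, gain: float = 0.4,
                         max_angle_rad: float = 0.5) -> float:
    """Steer against the lateral offset from the lane center.
    Positive offset = drifted right, so the command turns left (negative)."""
    angle = -gain * lane_offset_m
    return max(-max_angle_rad, min(max_angle_rad, angle))

print(steering_command_rad(0.3))   # drifted 0.3 m right -> ~ -0.12 rad (left)
```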
In 2020, Tesla released its innovative Full Self-Driving system, intended for the next generation of autonomous cars. It is made possible by the 4D sensor system, which offers better predictions of obstacles (both dynamic and static), trajectories, and navigation. Samsung has been Tesla’s chip partner for several years, providing 14-nanometer chips produced with the argon fluoride exposure process. The new chips rely on advanced extreme ultraviolet lithography, a high-tech process offered by only two companies in the world. The chips power the neural network processing units, processors, memory, display driving, and security circuits. All of this leads to a higher level of self-driving, in which the vehicle collects outside information through sensors and inside information from the driver’s input, and provides its output on the screen. The technology will become even more influential when 6G connectivity becomes global.
Real Business Example: Tesla
In 2020 Tesla reached $721 million in profit on $31.5 billion in total revenue.
In the case of Tesla, the benefits of AI are centered around the experience of self-driving: navigation, reaction, voice controls, and much more. Additionally, electric cars have no need for gasoline or diesel; drivers only need to charge the vehicles’ batteries. Lower fuel costs, as well as a much smaller carbon footprint, have attracted more than a million customers in spite of the relatively high price of Tesla vehicles. Annual savings can reach up to 40% compared to regular cars (Deloitte, 2018).
What sets Tesla apart from other electric vehicles is its integration of cameras, radars, and sensors for self-driving technology. Each of the 8 built-in cameras has its own deep neural network performing constant raw-image semantic analysis, distance and proximity estimation, and object identification. Tesla accumulates diverse driving scenarios from all over the world, sourcing data from over 1 million vehicles. The cameras provide 360-degree visibility for up to 250 meters, and ultrasonic sensors complement this by allowing the vehicles to detect both hard and soft objects. Tesla’s Autopilot system utilizes 48 neural networks, and the neural net for vision, sonar, and radar software allows not only looking in all directions simultaneously but also covering angles that would be inaccessible to human eyes.
Tesla does not manufacture fully autonomous vehicles at the moment, but it does offer many ADAS features in its cars. The controversial Autopilot system allows the vehicles to brake, accelerate, and steer automatically whenever these functions are enabled by the driver. Autopilot requires active driver supervision. Nevertheless, the system is capable of suggesting lane changes for route optimization, steering automatically toward highway exits or interchanges, and making adjustments to avoid slower vehicles in traffic. Autosteer+ makes it easier to navigate tight and complex roads, and Smart Summon adds better maneuvering in parking lots.
The experience starts with voice recognition, when the driver states the destination. Otherwise, the vehicle analyzes the driver’s calendar for a likely destination, or simply drives to the Home address. On arrival, the driver can step out at the entrance, and the car will park itself, ready to return at the tap of a button. Over-the-air updates also add necessary safety features such as emergency braking; side, front, and rear collision warnings; and automatic adjustment of high and low beams.
In spite of the multiple advantages of Tesla’s Autopilot, there has been debate about whether it uses the right technology. Tesla refuses to use LIDAR, relying heavily instead on radar and sensor technologies that are still innovative and “work-in-progress” for the market (Quain, 2019). These choices have been associated with four fatal car accidents since the technology’s introduction.
According to Tesla’s strategy, selling autonomous vehicles is merely the plan for the upcoming years; in order to really expand, the company aims to provide self-driving services such as robotaxis for consumers who do not have a driving license, do not want to own a car, or cannot drive for other reasons, including health. Tesla is considering providing a marketplace connecting the owners of its electric vehicles with those who would like to use them as a service.
Some of Tesla’s influential competitors in this segment include Google (Waymo), BMW, Audi, NVIDIA, Apple (Drive.ai), Aptiv, and many others. The ultimate goal of these players is to win the race for V2X (vehicle-to-everything) capabilities, and this is a challenging battlefield.
Tesla aims for the future, making sure it invests in the research and development of new technologies such as ADAS in order to benefit the entire automotive industry. One undeniable competitive advantage is that Tesla uses the data collected by its current autonomous vehicles to train its future driving assistance systems.
Conclusions
There is still room for the development of autonomous driving, as only Levels 2 and 3 are available in today’s electric vehicle market. However, the technology behind it is accelerating every year, which means it might take up to 5 years for us to experience Level 5 fully autonomous cars. This offers both risks and opportunities, as the sensors Tesla uses in its vehicles can be replaced almost every month with a better, more sophisticated version (Lambert, 2020).
The major debate surrounds the choice of technology: LIDAR sensors, which build 3D models of objects and are used by all the more conservative car producers, versus other semi-automated, radar- and sensor-based solutions like the ones offered in Tesla cars. It is hard to tell which choice is best, but every option certainly becomes more data-driven and safer over the course of research and development in this sphere.
Most of the players aim not only to produce self-driving cars but to conquer the “robo-taxi” market, which covers the whole population and can offer self-driving services to consumers who would not typically drive or own a car. It is a battle among tech giants like Google and Apple, niche and dedicated players like Tesla, and traditional car producers such as Honda, BMW, or Audi. Additional players such as BlackBerry or Samsung offer complementary technologies that speed up the development of the perfect self-driving vehicle even further.
Even though the software side of AI plays an important role in Tesla’s operations, its success lies in a specific combination of hardware and software, as well as physical materials appropriate for data collection. The innovation in neural network accelerator chips definitely gives Tesla another head start in the race for the perfect autonomous car. The growth of the market will significantly improve this technology, as the AI will get more and more training data, making the decisions and steps taken by the algorithms more efficient. Every Tesla car is a data collector, setting the pathway for all existing and future Tesla vehicles that drive in similar circumstances or in the same area.
Due to the multiple emerging business models that self-driving car market players are testing, there is a strong need for separate legislation for autonomous vehicles. Such legislation is in progress, but it is mostly connected to safety and the driver’s participation in the driving process, as car crashes are associated with the lack of human intervention (Baldwin, 2021).
References
Baldwin, Roberto. (2021). Government Agencies Clash over How Much to Regulate Self-Driving Cars, Car and Driver Blog, https://www.caranddriver.com/news/a35844915/ntsb-letter-nhtsa-self-driving-vehicles/.
English, Trevor. (2020). How Do Self-Driving Cars Work? Interesting Engineering, https://interestingengineering.com/how-do-self-driving-cars-work.
Deloitte Report. (2018). Autonomous Driving: Moonshot Project with Quantum Leap from Hardware to Software & AI Focus, https://www2.deloitte.com/content/dam/Deloitte/be/Documents/Deloitte_Autonomous-Driving.pdf.
Lambert, Fred. (2020). Tesla is working on HW 4.0 self-driving chip with TSMC for mass production in Q4 2021, report says, Electrek.
Nasseri, Ali, et al. (2020). 2020 Autonomous Vehicle Technology Report: The guide to understanding the state of the art in hardware & software for self-driving vehicles. Wevolver Blog, https://www.wevolver.com/article/2020.autonomous.vehicle.technology.report.
Neiger, Chris. (2020). 3 Top Autonomous-Vehicle Companies to Watch, The Motley Fool, https://www.fool.com/investing/2020/01/17/3-top-autonomous-vehicle-companies-to-watch.aspx.
NVIDIA Website. (2021). NVIDIA Drive AGX, https://www.nvidia.com/en-us/self-driving-cars/drive-platform/hardware/.
Schaffer, Melanie. (2021). BlackBerry Patent Application For Autonomous Vehicle System Approved: What’s Next? Yahoo! Finance, https://finance.yahoo.com/news/blackberry-patent-application-autonomous-vehicle-212756294.html.
Schmidt, Bridie. (2021). Tesla inks deal with Samsung to develop new nano chip for autonomous cars, The Driven Blog, https://thedriven.io/2021/01/27/tesla-inks-deal-with-samsung-to-develop-new-nano-chip-for-autonomous-cars/.
Tesla Website. (2021). Autopilot, https://www.tesla.com/autopilot.
Tesla Website. (2021). Autopilot AI, https://www.tesla.com/autopilotAI.
Quain, John R. (2019). These High-Tech Sensors May Be the Key to Autonomous Cars, The New York Times, https://www.nytimes.com/2019/09/26/business/autonomous-cars-sensors.html.