Working Method




Although the path to autonomous driving naturally runs through the continuous refinement of assistance systems, quite a few accident researchers would rather remove the driver altogether, since the driver poses the greatest accident risk. Only then, they argue, would we see what self-driving cars can really achieve in accident prevention.

Perhaps acceptance of such cars can be increased by explaining how they see the world, what they are capable of perceiving, and how this lets them react better than humans. After all, it is highly questionable whether intensive further development of driver assistance systems will automatically lead to fully automated driving; the two appear to differ fundamentally.

Trials with automated driving in the USA go back to 2013, when ordinary drivers who were not otherwise involved in the project took part. Part of the experiment was to use the assigned car as normal in everyday life after only two hours of training. The test vehicle was by no means perfect, however, so the test subjects had to watch constantly for any malfunction.

Surprisingly, enthusiasm was high across the entire group, even among those otherwise known as sporty drivers. Many saw the widespread availability of such cars as an opportunity to no longer have to complain constantly about the driving skills of other motorists. Even back then, however, a certain credulity in handling such vehicles was evident, and subsequent accidents seem to have confirmed it.

A kind of blind trust, and strikingly distracting activities even at higher speeds, were not what one would have expected from the test subjects. And as became even clearer later, the better the technology, the more people rely on it; overall, safety remains largely unchanged despite improved technology. This is a fundamental difference from a driverless system, which increases traffic safety precisely by excluding the driver.

Which brings us to the core problem outsiders face with automated driving: understanding everything engineers have implemented to give it its capabilities. Such a car must always know where it is. It learns this by comparing its perception of the environment with, for example, cartographic data, which naturally also includes what is currently happening around the vehicle.

Not only moving objects are perceived. It is hard to say where the line runs between an object and what might be called the fixed furniture of the surroundings. Perhaps this is why driverless operation has so far only been possible on roads that have been mapped repeatedly. So let us first assume that only details that change during the observation period receive attention as objects.
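A minimal sketch of this assumption, treating only pixels that change between two frames as object candidates. The tiny brightness grids and the threshold are invented for illustration; real systems work on full sensor images.

```python
# Sketch: treat only pixels that change between frames as "objects".
# The frames are tiny 2D brightness grids standing in for camera images;
# the threshold of 10 is an illustrative assumption.

def moving_cells(prev_frame, curr_frame, threshold=10):
    """Return (x, y) coordinates whose brightness changed more than `threshold`."""
    moving = []
    for y, (row_prev, row_curr) in enumerate(zip(prev_frame, curr_frame)):
        for x, (a, b) in enumerate(zip(row_prev, row_curr)):
            if abs(a - b) > threshold:
                moving.append((x, y))
    return moving

# A stationary background (value 50) with one bright "object" moving right.
frame_a = [[50, 50, 50, 50],
           [50, 200, 50, 50],
           [50, 50, 50, 50]]
frame_b = [[50, 50, 50, 50],
           [50, 50, 200, 50],
           [50, 50, 50, 50]]

print(moving_cells(frame_a, frame_b))  # [(1, 1), (2, 1)]: the cell left and the cell entered
```

Everything that never changes between frames stays invisible to this scheme, which is exactly why the static surroundings need a second mechanism.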

Some parts of the image that do not stand out through movement can still be recognized as objects, namely by comparison with stored images of things that may occur in road traffic. Examples would, of course, be traffic signs, pylons with their characteristic shape, and road markings. These can then be detected even though nothing about them changes.
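Matching against stored patterns can be sketched as a simple template comparison. The diamond-shaped "sign" template and the agreement score below are purely illustrative, not a real detector:

```python
# Sketch: recognize a static object by comparing an image patch against a
# stored template (here a 3x3 binary pattern standing in for a sign shape).
# Template and score are assumptions for illustration only.

TEMPLATE = [(0, 1, 0),
            (1, 1, 1),
            (0, 1, 0)]  # idealized diamond-shaped "sign"

def match_score(patch, template=TEMPLATE):
    """Fraction of cells in which patch and template agree."""
    total = agree = 0
    for row_p, row_t in zip(patch, template):
        for p, t in zip(row_p, row_t):
            total += 1
            agree += (p == t)
    return agree / total

patch = [(0, 1, 0),
         (1, 1, 1),
         (0, 1, 1)]  # one noisy cell
print(match_score(patch))  # 8 of 9 cells agree
```

A real pipeline would of course compare grayscale patches at many positions and scales; the principle of scoring against a stored reference is the same.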

When humans look at their surroundings, a kind of specialization takes place, sometimes also called focusing. The image obtained by sensor technology is already more complete. And once its resolution progressively exceeds that of the human eye, a level of detail is reached that growing computing power can also handle well on the interpreting side. It is easy to see that earlier detection alone buys time for reactions.
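The time gained by earlier detection is easy to quantify. The speed and the half-second head start below are illustrative numbers, not measured values:

```python
# Worked example: how much distance earlier detection buys.
# Detecting a hazard 0.5 s sooner at 30 m/s (108 km/h) means braking
# can begin 15 m earlier; all figures are illustrative.

def distance_gained(speed_mps, earlier_s):
    """Extra distance available if detection happens `earlier_s` seconds sooner."""
    return speed_mps * earlier_s

print(distance_gained(30.0, 0.5))  # 15.0 metres
```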

Interpretation is, of course, playing an ever larger role compared to the mere scanning done by the sensors. While an assistance system can largely leave difficult decisions to the driver, e.g., whether a situation requires braking, the autonomous system must resolve the situation itself and ultimately take responsibility for the outcome.

Does the pedestrian only walk to the edge of the curb, or will they cross the entire roadway? Is easing off the accelerator enough to still leave time for an emergency stop, and can additional steering movements support the maneuver? At this point, at the latest, the difference between the two systems becomes clear, as does the possibility that, with a high proportion of automated vehicles, normal road traffic may actually become slower.

The particular importance of prediction should be emphasized here. With high-resolution images and, of course, the results from the other sensors, similarities to computer chess arise. What are the consequences if the vehicle in front suddenly swerves? It is already driving suspiciously far to the left. What is the situation ahead of that vehicle? If there is a construction site with pylons, typical traffic signs, and road markings, the situation is clear.
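A very reduced form of such a prediction: extrapolating the lead vehicle's lateral drift a couple of seconds ahead under a constant-velocity assumption. The lane half-width and the sample values are invented for illustration:

```python
# Sketch: predict the lead vehicle's lateral position a short time ahead
# from its recent lateral velocity (constant-velocity assumption).
# The 1.8 m lane half-width is an illustrative figure.

def predict_lateral(offset_m, lateral_speed_mps, horizon_s):
    """Extrapolate the lateral offset `horizon_s` seconds into the future."""
    return offset_m + lateral_speed_mps * horizon_s

def crosses_lane_edge(offset_m, lateral_speed_mps, horizon_s, half_lane=1.8):
    """True if the predicted position lies outside the own lane."""
    return abs(predict_lateral(offset_m, lateral_speed_mps, horizon_s)) > half_lane

# Vehicle 1.2 m left of the lane centre, still drifting left at 0.5 m/s:
print(crosses_lane_edge(-1.2, -0.5, 2.0))  # True: predicted offset about -2.2 m
```

Real predictors weigh many hypotheses at once, much as a chess program scores candidate moves, but each hypothesis boils down to an extrapolation like this one.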

In addition, the system is capable of learning. Is approaching the left edge of a lane a particular indication of a spontaneous lane change? Is there information that this vehicle previously tended to drift from left to right and back while keeping its lane? Conversely, is a tendency to veer toward the right side of a lane an indication of such a surprise maneuver, with dangerous consequences for following traffic?
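One way such a learned cue might look: tracking the spread of a vehicle's recent lateral offsets and flagging erratic lane-keeping as a reason for extra caution. The window length and threshold are assumptions for illustration:

```python
# Sketch: keep a short history of a tracked vehicle's lateral offsets (m,
# relative to lane centre) and flag a large spread as erratic lane-keeping.
# The 0.4 m threshold is an illustrative assumption.

from statistics import pstdev

def is_erratic(offsets, threshold_m=0.4):
    """True if the lateral-offset spread over the window exceeds the threshold."""
    return len(offsets) >= 2 and pstdev(offsets) > threshold_m

steady  = [0.1, 0.0, 0.1, -0.1, 0.0]      # normal lane-keeping jitter
weaving = [-0.9, 0.8, -0.7, 0.9, -0.8]    # drifting from left to right and back

print(is_erratic(steady), is_erratic(weaving))  # False True
```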

Does the size of the object matter? Certainly. Setting the construction site aside: if a truck pulls to the left, then, at least in Europe, a greater reduction in speed is to be expected than if it were just a passenger car. The automatic collection of truck tolls in Germany, which at that time applied only to vehicles with a gross vehicle weight rating above 12 tonnes, has demonstrated since 2005 how effective such classification systems can be.

The problems become more severe in the city. Pedestrians are perceived by such systems as moving objects with moving sub-parts; depending on their geometric position relative to the whole object, these can be arms, legs, or the head. Analyzing leg movements in relation to the movement of the whole object already yields important clues about possible dangers. In addition to traffic signs, there are traffic lights, whose signals may even contain arrows that must be interpreted correctly.
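The relation between limb motion and whole-body motion can be caricatured in a few lines. The states and thresholds below are invented for illustration and stand in for far richer pose analysis:

```python
# Sketch: relate limb motion to whole-body motion. A pedestrian whose legs
# are active while the body barely moves may be about to step off.
# States and thresholds are illustrative assumptions only.

def classify_pedestrian(body_speed_mps, leg_swing_amp):
    """Very rough state guess from body speed and leg-swing amplitude."""
    if body_speed_mps < 0.2 and leg_swing_amp < 0.1:
        return "standing"
    if body_speed_mps < 0.2:
        return "about_to_move"  # legs active, body still: heightened caution
    return "walking"

print(classify_pedestrian(0.0, 0.05))  # standing
print(classify_pedestrian(0.1, 0.4))   # about_to_move
print(classify_pedestrian(1.4, 0.5))   # walking
```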

And again and again, the unplanned situations. Those construction sites again. What if the pedestrian standing at the roadside turns out to be a police officer? Now, and perhaps in the previous example as well, hand signals become important. After all, the conspicuous roof structures are gradually disappearing, and soon self-driving cars will look like the ones we are used to today. If the police want to stop the car and do not notice that it is driving autonomously, ignoring them could prove costly.

Arms also matter when a cyclist is riding ahead; here, arm movements often stand in for a car's turn signal. How else would you reach the left-turn lane by bike? And while we are on the subject of rules: must the system let a school bus with its left indicator flashing merge into its lane, even and especially if the lane ahead is clear? Would it also let a van of the same size in? If so, that could become a nuisance once it happened more often.

A more far-reaching solution, however, might be Car2Car communication, in this case between a (school) bus or police car and a fully automated car. This could significantly extend the field of vision, for example by having a vehicle ahead report problem areas. Just think of winter weather, ice forming behind a bend, and so on. A Car2X system could help here even before the vehicle ahead gets into trouble.
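A Car2X hazard report could be sketched like this. The message fields and the 500 m relevance radius are assumptions for illustration, not any real V2X message standard:

```python
# Sketch of a Car2X hazard report: a vehicle ahead broadcasts a hazard it
# has detected, and a following car checks whether the report concerns its
# route ahead. Fields and the 500 m lookahead are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class HazardMessage:
    kind: str          # e.g. "ice", "construction"
    position_m: float  # distance along the road from a shared reference point

def relevant(msg, own_position_m, lookahead_m=500.0):
    """A hazard matters if it lies ahead of us within the lookahead range."""
    ahead = msg.position_m - own_position_m
    return 0.0 < ahead <= lookahead_m

msg = HazardMessage(kind="ice", position_m=1200.0)
print(relevant(msg, own_position_m=900.0))   # True: 300 m ahead
print(relevant(msg, own_position_m=1300.0))  # False: already passed
```

The point of such a message is exactly what the text describes: the receiving car learns about ice behind the bend before its own sensors could possibly see it.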

It has already been mentioned that it is not just the collection of data that matters, but also its interpretation and storage. At the very beginning of an image sequence you may still have the image of a pedestrian broken down into pixels, but what is stored afterwards is the result: that, for example, nothing is likely to happen with one posture, while another carries a risk of accident.
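Reducing a pixel sequence to a compact, storable assessment might look like the record below; the fields and labels are invented for illustration:

```python
# Sketch: instead of storing raw pixels, keep only the interpreted result
# of an image sequence. The record fields and labels are illustrative.

def summarize_observation(object_id, posture, risk):
    """Reduce a frame sequence to a compact, storable assessment."""
    return {"id": object_id, "posture": posture, "risk": risk}

record = summarize_observation(17, "facing_road_leaning_forward", "high")
print(record["risk"])  # high
```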

Here is an example from automotive engineering: in the past (and perhaps even today), there were masters of their craft who could tell what was wrong with a defective engine just by listening to it, and then judge whether a repair was still worthwhile. They could do this because they had long experience in matching a noise to the damage found after disassembly. A computer system in a vehicle can exceed such experience perhaps a thousand or a hundred thousand times over, and if those experiences can also be exchanged between vehicles, the factor grows accordingly.
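The mechanic analogy maps naturally onto a nearest-neighbour lookup over stored cases. The noise "feature" (a single dominant frequency) and the damage labels below are entirely invented for illustration:

```python
# Sketch of the "experienced mechanic" as nearest-neighbour lookup: past
# cases pair a noise feature (here one dominant frequency in Hz) with the
# damage later found on disassembly. All data are invented; a real system
# would use far richer acoustic features and far more cases.

CASES = [
    (120.0, "worn main bearing"),
    (480.0, "loose timing chain"),
    (950.0, "valve clearance too large"),
]

def diagnose(freq_hz):
    """Return the damage label of the closest stored case."""
    return min(CASES, key=lambda case: abs(case[0] - freq_hz))[1]

print(diagnose(500.0))  # closest to the 480 Hz case
```

Pooling cases from many vehicles simply grows `CASES`, which is the "interchangeable experience" the text refers to.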






