Enhancing Driving Safety and Efficiency via Smartphone Systems
Smart handheld devices have become increasingly popular, and researchers have put enormous effort into environment sensing, movement tracking, and localization using the context-sensing capabilities of these devices. In particular, smartphones play an important role in people's driving activities, and much work has been done to enhance the driving experience with smartphone systems. For example, smartphones can provide better navigation services than standalone GPS navigators because they can collect real-time traffic status, road construction information, and other special conditions to plan the best routing option for the user. While we enjoy the convenience brought by smartphones, this convenience has inevitably introduced a great deal of distraction, which often tragically results in traffic accidents. Thousands of injuries have been attributed to driver phone use. Although governments have legislated to prohibit driver phone use, law enforcement alone is not enough to keep drivers from taking calls or texting.

My thesis work aims to enhance driving safety and efficiency via smartphones. From the driving safety perspective, drivers making phone calls and texting have become a major source of distraction, leading to numerous injuries and even deaths. My first work is a system that automatically detects whether the driver is making a phone call. The system consists of a unidirectional microphone installed at the center of the steering wheel, the vehicle's On-Board Unit (OBU), and a smartphone. The unidirectional microphone collects the driver's voice; when a phone conversation is in progress, the smartphone records the voice spoken into it. Both voice sources send their samples to the OBU for analysis.
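The voice analysis the OBU performs is based on MFCC features, as described in the following paragraph. As a rough illustration only, MFCC extraction can be sketched with numpy and scipy; the frame length, hop size, filter count, and sample rate below are conventional illustrative choices, not the system's actual parameters:

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512, n_mels=26, n_ceps=13):
    # Split into overlapping frames and apply a Hamming window.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank, equally spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log mel energies, then a DCT to decorrelate -> cepstral coefficients.
    feats = np.log(power @ fbank.T + 1e-10)
    return dct(feats, type=2, axis=1, norm="ortho")[:, :n_ceps]

# One second of placeholder audio at 16 kHz -> 98 frames of 13 coefficients.
feats = mfcc(np.random.default_rng(1).standard_normal(16000))
```

In a real deployment the same extraction would run on both the steering-wheel microphone samples and the phone's recording, so the two feature streams are directly comparable.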
I adopt the Mel-Frequency Cepstral Coefficients (MFCC) model to extract the driver's voice features from the samples collected by the unidirectional microphone. A Gaussian Mixture Model is then used to determine whether the person speaking into the smartphone is the driver. We have conducted real-world experiments, and the results demonstrate a high probability of detecting driver phone calls with a small false alarm rate.

From the viewpoint of enhancing driving efficiency, I design a system that augments current navigation systems with lane-level navigation for highway driving. However far navigation systems have evolved, real-time lane-level positioning and navigation are still not supported. In this system, the smartphone is assumed to be mounted on the windshield to navigate the driver. I take advantage of the smartphone's context-sensing ability to achieve lane identification by using the phone's rear camera to capture the road view. Each picture goes through an image-processing stage to identify the lane the vehicle is currently in, while the smartphone's accelerometer and gyroscope monitor the phone's motion to detect lane changes.

For lane-level position identification, the system leverages computer vision algorithms to process the road-view pictures. The pictures are pre-processed to minimize computational overhead, and the reduced images are analyzed by a Hough line detection algorithm that recognizes candidate lane marks. I design an algorithm that exploits the symmetric properties of lane marks in the camera view to further identify the vehicle's current lane-level position. Once the lane-level position is obtained, the accelerometer and gyroscope are activated to collect motion data of the smartphone, and a pattern recognition algorithm detects lane-change patterns in the collected sensory data to achieve lane tracking.
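The GMM decision step of the first system might be sketched as below. This is a minimal illustration rather than the thesis implementation: the feature matrices are random stand-ins for real MFCC frames, and the component count and decision threshold are arbitrary choices.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-ins for MFCC feature matrices (rows = frames, cols = coefficients).
# Real input would come from the steering-wheel mic (enrollment) and the
# phone's in-call recording (test).
driver_train = rng.normal(loc=0.0, scale=1.0, size=(500, 13))
driver_test  = rng.normal(loc=0.0, scale=1.0, size=(200, 13))
other_test   = rng.normal(loc=4.0, scale=1.0, size=(200, 13))

# Fit a GMM to the driver's enrolled voice features.
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
gmm.fit(driver_train)

def is_driver(features, threshold=-25.0):
    # gmm.score returns the average per-frame log-likelihood; a sample that
    # matches the driver model scores well above one from another speaker.
    return gmm.score(features) > threshold

print(is_driver(driver_test))  # expected: True (matches the driver model)
print(is_driver(other_test))   # expected: False (far from the driver model)
```

In practice the threshold would be tuned on held-out recordings to trade detection probability against the false alarm rate the experiments report.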
We evaluated the system in real-world experiments: it identifies the lane-level position with 91% accuracy and tracks lane changes with 88.5% accuracy.
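The Hough line detection step at the core of the lane identification can be sketched as a plain accumulator vote over (rho, theta) line parameters. The example below runs on a synthetic binary edge image with a single vertical "lane mark"; a real pipeline would first run edge detection on the reduced camera frame.

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Vote each edge pixel into a (rho, theta) accumulator."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))      # 0..179 degrees
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # rho = x*cos(theta) + y*sin(theta); shift by diag so indices are >= 0.
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    return acc, thetas, diag

# Synthetic 100x100 edge image with one vertical lane mark at x = 40.
img = np.zeros((100, 100), dtype=np.uint8)
img[:, 40] = 1
acc, thetas, diag = hough_lines(img)
rho_i, theta_i = np.unravel_index(np.argmax(acc), acc.shape)
# The vertical line x = 40 shows up as the peak theta = 0, rho = 40.
print(theta_i, rho_i - diag)  # expected: 0 40
```

Candidate lane marks are the accumulator peaks; the symmetry-based lane-position algorithm would then compare the left and right candidates in the camera view.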