Review written by Aishwarya Mandyam (COS, G1)
Fully and partially self-driving cars are increasingly within sight. However, before these vehicles can make their mainstream debut, consumers must be assured of their safety. In particular, these cars should be able to “see” and “act” like human drivers: we expect a human driver to have sufficient intuition to avoid pedestrians and stationary objects, and we would naturally expect a self-driving car to exhibit the same caution. As such, a self-driving car’s collision and error detection system must effectively reason about what it cannot see. In a new paper from Professor Felix Heide’s group in the Princeton COS department, in collaboration with Mercedes-Benz, Scheiner et al. introduce a method that uses Doppler radar to enable cars to see around corners. By tracking hidden objects, their method enables more effective error detection, making these vehicles better suited to operating in the real world, where safety is a primary concern.
Review written by Andy Jones (COS, GS)
Understanding the link between neural activity and behavior is one of the long-running goals of neuroscience. In the information age, it is becoming more and more common for neuroscientists to take a data-driven approach to studying animal behavior in order to gain insight into the brain. Under this approach, scientists collect hours’ or days’ worth of video recordings of an animal, relying on modern machine learning (ML) systems to automatically identify the exact locations of body parts and classify behavior types. These methods have opened the door for more expansive studies of the relationship between brain activity and behavior, without relying on laborious manual annotations of animal movements.
Written by Anika Maskara ‘23 & Thiago Tarraf Varella (PSY GS)
It is common in popular culture to imagine human decision making as a clash of two distinct choices. There is a “good option” and a “bad option,” an angel or a devil sitting on our shoulders. Like many dichotomies, though, that view of decision making is misleading. It is true that research suggests we have two different decision-making systems that sometimes disagree about which action to take, but neither is better or worse than the other; they simply use different algorithms to help us decide what to do.
Review written by Shanka Subhra Mondal (ELE)
In recent years, machine learning (ML) has become commonplace in our software and devices. Its applications are varied, ranging from finance and marketing to healthcare and computer vision. ML can already outperform humans on many tasks, such as video game competitions and image/object recognition, to name a few. At a high level, ML comprises a set of algorithms that rely heavily on data (training data) to make decisions about another set of data (testing data) that they have not previously encountered. One of the sub-fields of computer vision where machine learning has proven particularly useful is image classification, where the goal is to categorize objects in a given image. While this task might sound easy for humans, it can be challenging for an algorithm, particularly when the picture is blurred, poorly illuminated, or noisy. Robust image classification is not an easy task.
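The training-data/testing-data paradigm described above can be illustrated in a few lines. The sketch below uses scikit-learn's bundled handwritten-digit images and a logistic-regression classifier; these are illustrative choices for the general workflow, not the method of any particular paper discussed here.

```python
# Minimal sketch of the train/test paradigm: a model learns from
# training data, then is evaluated on testing data it has never seen.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale digit images with labels 0-9

# Hold out 25% of the data as unseen "testing data"
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=5000)
clf.fit(X_train, y_train)  # learn only from the training data

# Accuracy on examples the model has not previously encountered
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Even this simple classifier categorizes clean digit images well; the challenge the paragraph points to arises when the test images are blurred, poorly lit, or noisy, which is what robust image classification must handle.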