For next-generation sensors and driving functions, integrated sensor fusion approaches like the Dynamic Grid overcome the limitations of current solutions. Typical driver assistance systems or automated driving functions consist of several components: one or more sensors, sensor fusion algorithms, the driving function itself, and the actual vehicle control, i.e. steering, throttle, and brake. Current-generation ADAS such as AEB, ACC, and lane keeping operate in well-structured environments and only need to recognize a limited set of similar object types in a limited number of scenarios. For this, low-resolution camera, radar, and LiDAR sensors are used in combination with well-established algorithms such as Kalman filtering and static occupancy grids. While this approach offers advantages such as high modularity, it often fails in the more challenging scenarios that next-generation driving functions and higher automation levels must handle. To overcome the limitations of current sensor fusion approaches, we propose an integrated sensor fusion approach, the Dynamic Grid, that jointly determines dynamic objects, the static environment, and free space. The Dynamic Grid incorporates data from cameras that provide semantic point clouds as well as from high-resolution radar and LiDAR sensors. It operates on low-level data and requires no further preprocessing. With this approach, high detection rates and low false alarm rates can be achieved while remaining real-time capable on typical automotive CPUs.
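To make the grid-based terminology concrete, the sketch below shows a toy per-cell state that combines a Bayesian log-odds occupancy estimate with a per-cell velocity estimate, so that a cell can be labeled as free, static, or dynamic. This is purely illustrative and is not the Dynamic Grid algorithm described in the paper; the `GridCell` class, its thresholds, and the weighting scheme are assumptions chosen for clarity.

```python
import numpy as np

class GridCell:
    """Toy cell state: occupancy (log-odds) plus a 2-D velocity estimate.
    Illustrative only; not the paper's actual Dynamic Grid formulation."""

    def __init__(self):
        self.log_odds = 0.0          # occupancy in log-odds form (0.0 = unknown)
        self.velocity = np.zeros(2)  # estimated cell velocity in m/s
        self.vel_weight = 0.0        # accumulated confidence in the velocity

    def update_occupancy(self, p_meas):
        """Standard Bayesian log-odds update from an inverse sensor model."""
        p_meas = np.clip(p_meas, 1e-3, 1 - 1e-3)
        self.log_odds += np.log(p_meas / (1.0 - p_meas))

    def update_velocity(self, v_meas, weight):
        """Weighted running average of radar/LiDAR-derived velocity evidence."""
        total = self.vel_weight + weight
        self.velocity = (self.vel_weight * self.velocity
                         + weight * np.asarray(v_meas, dtype=float)) / total
        self.vel_weight = total

    @property
    def occupancy(self):
        """Convert log-odds back to an occupancy probability."""
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds))

    def classify(self, occ_thresh=0.7, vel_thresh=0.5):
        """Label the cell as free space, static obstacle, or dynamic obstacle."""
        if self.occupancy < 1.0 - occ_thresh:
            return "free"
        if self.occupancy > occ_thresh:
            return "dynamic" if np.linalg.norm(self.velocity) > vel_thresh else "static"
        return "unknown"


# Usage example: one cell receives occupied evidence and a Doppler-like velocity.
cell = GridCell()
cell.update_occupancy(0.80)                    # camera/LiDAR: likely occupied
cell.update_occupancy(0.85)
cell.update_velocity([2.0, 0.1], weight=1.0)   # radar suggests ~2 m/s motion
print(cell.occupancy, cell.classify())         # high occupancy, labeled "dynamic"
```

In this simplified view, a conventional static occupancy grid would only carry the occupancy term, so moving objects leave smeared traces; attaching motion information per cell is one way to picture why a joint estimate of dynamic objects, static environment, and free space is attractive.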