CARLA (an open urban driving simulator)
Paper: http://proceedings.mlr.press/v78/dosovitskiy17a/dosovitskiy17a.pdf
Documentation: https://carla.readthedocs.io/en/latest/
Introduction:
CARLA covers three approaches to autonomous driving: ① a classic modular, rule-based pipeline ② end-to-end imitation learning ③ end-to-end reinforcement learning.
CARLA supports both perception and control. It provides urban road scenes with vehicles, buildings, pedestrians, and traffic signs. CARLA exposes interfaces to the world and to the agent; the client API is driven from Python and connects the agent to the server in a socket-like fashion. The client sends commands and meta-commands: direct commands are steering, acceleration, and braking, while meta-commands control the server's behaviour, such as resetting the simulation, changing the environment, and modifying sensor parameters.
CARLA can trade visual quality against simulation speed. It ships with two towns: Town01, used for training, and Town02, used for testing. CARLA provides many sensors: RGB cameras, a depth camera (the depth map and the semantic segmentation are generated by CARLA itself; the segmentation has 12 classes: road, lane marking, traffic light, pedestrian, and so on), GPS positioning, speed and acceleration sensors, collision sensors, etc.
In CARLA the observed state comes from the sensor inputs, and the actions consist of steering, throttle, and brake.
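For orientation, the following is a minimal client sketch in the style of the 0.8.x Python API assumed throughout these notes (host, port, episode length, and settings values are placeholders; the server must already be running):

from carla.client import make_carla_client
from carla.settings import CarlaSettings

# connect to a CARLA server listening on the default port
with make_carla_client('localhost', 2000) as client:
    settings = CarlaSettings()
    settings.set(NumberOfVehicles=20, NumberOfPedestrians=40, WeatherId=1)
    client.load_settings(settings)   # meta-command: configure the episode
    client.start_episode(0)          # start from player start position 0
    for _ in range(100):
        measurements, sensor_data = client.read_data()
        # direct commands: steering, throttle, brake
        client.send_control(steer=0.0, throttle=0.5, brake=0.0,
                            hand_brake=False, reverse=False)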
Getting started:
Install the dependencies on Linux:
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get install build-essential clang-5.0 lld-5.0 g++-7 ninja-build python python-pip python-dev tzdata sed curl wget unzip autoconf libtool
pip install --user setuptools nose2
sudo update-alternatives --install /usr/bin/clang++ clang++ /usr/lib/llvm-5.0/bin/clang++ 101
sudo update-alternatives --install /usr/bin/clang clang /usr/lib/llvm-5.0/bin/clang 101
Install Unreal Engine:
git clone --depth=1 -b 4.19 https://github.com/EpicGames/UnrealEngine.git ~/UnrealEngine_4.19
cd ~/UnrealEngine_4.19
./Setup.sh && ./GenerateProjectFiles.sh && make
Install CARLA:
git clone https://github.com/carla-simulator/carla
export UE4_ROOT=~/UnrealEngine_4.19
From the directory where CARLA was downloaded, launch the simulator:
./CarlaUE4.sh
CARLA listens on TCP ports 2000 and 2001 by default; they can be changed with the following flag:
-carla-port=N
To run an example client:
python example.py
Change the map:
./CarlaUE4.sh /Game/Carla/Maps/Town02
Configuration:
Fast-forward simulation time by running at a fixed frame rate:
./CarlaUE4.sh -benchmark -fps=5
Modify camera and sensor parameters:
These are set in the Example.CarlaSettings.ini file. Images are sent back from the server as BGRA arrays; users can also define their own formats.
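As a sketch of how such a BGRA buffer can be turned into an RGB array on the client once images from any of the cameras below are being received (assuming the 0.8.x PythonClient, where every camera image exposes raw_data, width, and height):

import numpy as np

def to_rgb(image):
    # flat BGRA byte buffer -> (height, width, 4) array
    bgra = np.frombuffer(image.raw_data, dtype=np.uint8)
    bgra = bgra.reshape((image.height, image.width, 4))
    # drop alpha and reorder BGR -> RGB
    return bgra[:, :, :3][:, :, ::-1]

The PythonClient also ships a carla.image_converter module with similar helpers.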
Scene final camera (normally sensors are configured in Python; the equivalent .ini settings are shown once below and not listed again afterwards). The 'SceneFinal' post-processing makes the rendered scene look realistic. In Python:
camera = carla.sensor.Camera('MyCamera', PostProcessing='SceneFinal')
camera.set(FOV=90.0)
camera.set_image_size(800, 600)
camera.set_position(x=0.30, y=0, z=1.30)
camera.set_rotation(pitch=0, yaw=0, roll=0)
carla_settings.add_sensor(camera)
In CarlaSettings.ini:
[CARLA/Sensor/MyCamera]
SensorType=CAMERA
PostProcessing=SceneFinal
ImageSizeX=800
ImageSizeY=600
FOV=90
PositionX=0.30
PositionY=0
PositionZ=1.30
RotationPitch=0
RotationRoll=0
RotationYaw=0
Depth map camera:
camera = carla.sensor.Camera('MyCamera', PostProcessing='Depth')
camera.set(FOV=90.0)
camera.set_image_size(800, 600)
camera.set_position(x=0.30, y=0, z=1.30)
camera.set_rotation(pitch=0, yaw=0, roll=0)
carla_settings.add_sensor(camera)
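The depth camera packs the per-pixel distance into the R, G, and B channels. A decoding sketch based on CARLA's documented encoding (far plane at 1000 m; same raw_data assumptions as above):

import numpy as np

def depth_in_meters(image):
    bgra = np.frombuffer(image.raw_data, dtype=np.uint8)
    bgra = bgra.reshape((image.height, image.width, 4)).astype(np.float32)
    # channels arrive in BGRA order: depth = (R + G*256 + B*256*256) / (2^24 - 1)
    normalized = np.dot(bgra[:, :, :3], [65536.0, 256.0, 1.0]) / 16777215.0
    return 1000.0 * normalized

carla.image_converter.depth_to_array performs the same decoding but returns the normalized value without the scaling to meters.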
Semantic segmentation camera: assigns a class label to every object in the image
camera = carla.sensor.Camera('MyCamera', PostProcessing='SemanticSegmentation')
camera.set(FOV=90.0)
camera.set_image_size(800, 600)
camera.set_position(x=0.30, y=0, z=1.30)
camera.set_rotation(pitch=0, yaw=0, roll=0)
carla_settings.add_sensor(camera)
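The segmentation camera encodes the class id of each pixel in the red channel of the image; a short sketch to read the label map (same assumptions as above):

import numpy as np

def label_map(image):
    bgra = np.frombuffer(image.raw_data, dtype=np.uint8)
    bgra = bgra.reshape((image.height, image.width, 4))
    return bgra[:, :, 2]   # red channel holds the class id (road, lane marking, ...)

carla.image_converter.labels_to_cityscapes_palette converts these ids into a color image for visualization.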
Lidar: a rotating lidar that returns a 3D point cloud of the surroundings
lidar = carla.sensor.Lidar('MyLidar')
lidar.set(
    Channels=32,
    Range=50,
    PointsPerSecond=100000,
    RotationFrequency=10,
    UpperFovLimit=10,
    LowerFovLimit=-30)
lidar.set_position(x=0, y=0, z=1.40)
lidar.set_rotation(pitch=0, yaw=0, roll=0)
carla_settings.add_sensor(lidar)
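On the client, the lidar measurement arrives through sensor_data under the name given above; a sketch of reading it, assuming the 0.8.x PythonClient where the measurement exposes the points as a numpy array and can be saved as a .ply file (both attribute names are assumptions, not verified here):

measurements, sensor_data = client.read_data()
lidar_measurement = sensor_data['MyLidar']
points = lidar_measurement.data                          # assumed: array of 3D points
lidar_measurement.save_to_disk('_out/lidar_000000.ply')  # assumed helper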
Benchmark agent:
Both the agent and the experiment suite need to be defined by the user.
# We instantiate a forward agent, a simple policy that just sets
# the throttle to 0.9 and the steering to zero
agent = ForwardAgent()
# We instantiate an experiment suite. Basically a set of experiments
# that are going to be evaluated on this benchmark.
experiment_suite = BasicExperimentSuite(city_name)
# Now actually run the driving benchmark.
# Besides the agent and experiment suite we pass the city name
# (Town01, Town02), the log name, whether to continue a previous
# experiment, and the host and port of the server.
run_driving_benchmark(agent, experiment_suite, city_name,
                      log_name, continue_experiment,
                      host, port)
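For context, the snippet above needs roughly the following import and argument values to run as a standalone script; the import path is an assumption based on the 0.8.x PythonClient and may differ between CARLA versions, and the log name is an arbitrary example:

# ForwardAgent and BasicExperimentSuite are the user-defined classes below
from carla.driving_benchmark import run_driving_benchmark

city_name = 'Town01'            # Town01 for training, Town02 for testing
log_name = 'basic_forward'      # folder used for the benchmark results and logs
continue_experiment = False     # set True to resume an interrupted benchmark
host, port = '127.0.0.1', 2000  # address of the running CARLA server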
Define the agent: measurements carries the agent's position, orientation, and dynamic state; sensor_data carries the camera and lidar readings; directions carries the high-level command from the planner (go straight, turn right, turn left, and so on); target carries the target position and orientation. From this information the function returns the control: steering angle, throttle opening, braking force, etc.
from carla.agent.agent import Agent
from carla.client import VehicleControl
class ForwardAgent(Agent):

    def run_step(self, measurements, sensor_data, directions, target):
        """
        Function to run a control step in the CARLA vehicle.
        """
        control = VehicleControl()
        control.throttle = 0.9
        return control
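A slightly less trivial agent can make use of the measurements it receives. A sketch that caps the speed, reusing the imports above and assuming the 0.8.x measurements message where player_measurements.forward_speed is reported in m/s (the 10 m/s threshold is an arbitrary value for illustration):

class CappedSpeedAgent(Agent):

    def run_step(self, measurements, sensor_data, directions, target):
        control = VehicleControl()
        control.steer = 0.0
        # accelerate up to the assumed target speed, then coast and brake
        if measurements.player_measurements.forward_speed < 10.0:
            control.throttle = 0.9
        else:
            control.throttle = 0.0
            control.brake = 0.3
        return control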
Define the experiment suite:
from carla.agent_benchmark.experiment import Experiment
from carla.sensor import Camera
from carla.settings import CarlaSettings
from .experiment_suite import ExperimentSuite
class BasicExperimentSuite(ExperimentSuite):

    @property
    def train_weathers(self):
        return [1]

    @property
    def test_weathers(self):
        return [1]
View the start positions:
python view_start_positions.py
Add some further options (the tasks):
# Define the start/end position below as tasks
poses_task0 = [[7, 3]]
poses_task1 = [[138, 17]]
poses_task2 = [[140, 134]]
poses_task3 = [[140, 134]]
# Concatenate all the tasks
poses_tasks = [poses_task0, poses_task1, poses_task2, poses_task3]
# Add dynamic objects to tasks
vehicles_tasks = [0, 0, 0, 20]
pedestrians_tasks = [0, 0, 0, 50]
Build the experiment vector:
experiments_vector = []
# The weathers used are the union of the test and train weathers
for weather in used_weathers:
    for iteration in range(len(poses_tasks)):
        poses = poses_tasks[iteration]
        vehicles = vehicles_tasks[iteration]
        pedestrians = pedestrians_tasks[iteration]

        conditions = CarlaSettings()
        conditions.set(
            SendNonPlayerAgentsInfo=True,
            NumberOfVehicles=vehicles,
            NumberOfPedestrians=pedestrians,
            WeatherId=weather
        )
        # Add all the cameras that were set for this experiment
        conditions.add_sensor(camera)

        experiment = Experiment()
        experiment.set(
            Conditions=conditions,
            Poses=poses,
            Task=iteration,
            Repetitions=1
        )
        experiments_vector.append(experiment)
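For completeness, a sketch of how the pieces above might sit inside the suite's build_experiments method (the method name follows the CARLA driving benchmark tutorial; the camera passed to conditions.add_sensor and the used_weathers variable are not defined in the snippets above, so the values here are illustrative; Camera comes from the carla.sensor import above):

def build_experiments(self):
    # camera attached to every experiment (used by conditions.add_sensor above)
    camera = Camera('CameraRGB')
    camera.set(FOV=100)
    camera.set_image_size(800, 600)
    camera.set_position(2.0, 0.0, 1.4)
    camera.set_rotation(-15.0, 0, 0)

    # weathers to evaluate: union of the train and test weathers
    used_weathers = set(self.train_weathers + self.test_weathers)

    experiments_vector = []
    # ... the poses_tasks / vehicles_tasks / pedestrians_tasks definitions
    # and the loop shown above go here ...

    return experiments_vector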
Define the evaluation metrics:
@property
def metrics_parameters(self):
    """
    Property to return the parameters for the metrics module.
    Could be redefined depending on the needs of the user.
    """
    return {
        'intersection_offroad': {'frames_skip': 10,
                                 'frames_recount': 20,
                                 'threshold': 0.3},
        'intersection_otherlane': {'frames_skip': 10,
                                   'frames_recount': 20,
                                   'threshold': 0.4},
        'collision_other': {'frames_skip': 10,
                            'frames_recount': 20,
                            'threshold': 400},
        'collision_vehicles': {'frames_skip': 10,
                               'frames_recount': 30,
                               'threshold': 400},
        'collision_pedestrians': {'frames_skip': 5,
                                  'frames_recount': 100,
                                  'threshold': 300},
    }