DoorDash hopes to reduce abusive and inappropriate interactions between consumers and delivery workers with a new AI-powered feature that automatically detects offensive language.
Dubbed “SafeChat+,” the feature uses AI to review in-app conversations and determine whether a customer or Dasher is being harassed. Depending on the scenario, customers have the option to report an incident and contact DoorDash's support team, while Dashers on the receiving end of abuse can cancel the delivery immediately without affecting their ratings. DoorDash also sends warnings urging users to refrain from using profanity.
The company says its AI analyzes more than 1,400 messages per minute and covers “dozens” of languages, including English, French, Spanish, Portuguese and Chinese. Team members investigate every incident the AI flags.
The feature is an upgrade from SafeChat, under which DoorDash's Trust & Safety team manually screened chats for abusive language. The company told TechCrunch that SafeChat+ is “the same concept [as SafeChat], but backed by even more sophisticated technology that understands nuance and threats that don't match specific keywords.”
“We know that verbal abuse and harassment is the largest category of safety incident on our platform. By introducing this feature, we believe we can further reduce the overall number of incidents on our platform,” DoorDash added.
DoorDash claims that more than 99.99% of deliveries on its platform are completed without safety-related incidents.
The platform also has an in-app toolkit called SafeDash that connects Dashers with ADT agents, allowing them to share location and other information with 911 services in an emergency.