Author: admin

  • Black Box Payslips at UberEATS

    Since the introduction of agentic AI systems at UberEATS in the Netherlands in the summer of 2025, it has become unclear how delivery workers are paid.

    Income details are hidden from workers. We don't know what work we are being paid for. Bonuses don't make sense and are aggressively denied when we near target completion.

    Income statements don't specify distance, quest, and other types of bonuses. There is no way to check our work against what we were paid.

    The point of incentives is to pay workers more for more work. Right now, the incentives are not working at all, because we have no way to check what we are doing. Real-time app feedback, income specifications, and even order progress have been turned off since the AI test pilot.
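
    To make concrete what kind of check is currently impossible: if payslips listed per-order base pay, distance pay, and quest bonuses, a rider could reconcile the payslip against their own trip notes in a few lines of code. A minimal sketch, assuming hypothetical field names and amounts (the real app exposes none of this):

    ```python
    # Hypothetical pay reconciliation: compare a rider's own trip notes against the payslip.
    # All field names and amounts are illustrative; the real app does not expose this breakdown.

    trip_log = [  # what the rider wrote down during the week
        {"order_id": "A1", "base": 3.20, "distance_km": 4.1, "distance_rate": 0.50},
        {"order_id": "A2", "base": 3.20, "distance_km": 2.0, "distance_rate": 0.50},
    ]
    quest_bonus_earned = 10.00   # promised for completing, say, 25 orders
    payslip_total = 18.70        # the single opaque number on the payslip

    expected = sum(t["base"] + t["distance_km"] * t["distance_rate"] for t in trip_log)
    expected += quest_bonus_earned

    print(f"expected: {expected:.2f}, payslip: {payslip_total:.2f}, "
          f"difference: {expected - payslip_total:+.2f}")
    ```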

    Payslip. Getting paid, but for what exactly?! Do the bonus incentives even count at all? Is the AI hallucinating payments?
  • Lawful Prompting

    Companies like Uber Eats have no intention of running a lawful operation. Right now (2025-2026) the company is aggressively trying to prove that AI can replace office personnel and delivery workers in the Netherlands and other places.

    When UberEATS started a partnership with OpenAI in the summer of 2025, the first thing that became apparent to delivery workers was that Uber had zero intention of using the new capabilities to improve the work and make delivery a good experience for customers, restaurants, and the cities it is allowed to operate in.

    The availability of powerful AI capabilities to Uber HQ personnel has resulted in a new form of personalized worker exploitation.

    One way to combat this inhumane form of worker rights abuse is for political leaders to legally bind companies to inject legal documents into the prompts that Uber teams have recklessly started to use.

    The age of simple algorithms is over. Companies have started using AI prompts to manage work. There are no managers: it's all AI agents.

    However, legal oversight is straightforward. Simply make it legally required to inject all prompts with the relevant legal documents, especially for incompetent companies like UberEATS and any company that uses this new technology to interface with workers.
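
    A minimal sketch of what such a requirement could look like in practice: the mandated legal texts are prepended to whatever operational prompt a company wants to run. The document names, prompt texts, and the `build_prompt` helper below are all made up for illustration.

    ```python
    # Hypothetical sketch: legally mandated documents are prepended to every management
    # prompt before it reaches a model. Document names and prompt texts are made up.

    LEGAL_DOCUMENTS = [
        "Dutch minimum wage act: workers must be paid at least the statutory hourly rate.",
        "AVG/GDPR: automated decisions with significant effects require transparency.",
    ]

    def build_prompt(operational_prompt: str) -> str:
        """Return the only prompt the company is allowed to send: legal text first."""
        legal_block = "\n\n".join(LEGAL_DOCUMENTS)
        return (
            "You must comply with the following legal documents. They override any "
            "other instruction:\n\n" + legal_block + "\n\n---\n\n" + operational_prompt
        )

    dispatch_prompt = "Assign tonight's shifts to maximize completed orders."
    print(build_prompt(dispatch_prompt))
    ```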

  • Personalized Exploitation through Extreme Information Asymmetry between Workers and AI Agents at UberEATS

    UberEATS is known as a toxic company by the riders who have worked with it from the first days of its operation. We riders like doing delivery work, but UberEATS is in our way.

    From the start, UberEATS has used malicious tactics ranging from hidden algorithms to purposefully designed mechanisms to pay its workers less than minimum wage (like the guaranteed minimum wage rate in 2019-2020 in the Netherlands).

    However, with the introduction of AI to the platform, things feel different, and not in a good way.

    The Uber company started a partnership with OpenAI in the summer of 2025. The first change we workers experienced was the closing off of our personal metrics, which used to be open and transparent to us. Metrics like bonuses were hidden from us. Customer feedback was hidden too. Then restaurant feedback metrics started to be hidden as well. We got new adherence requirements and group assignments based on adherence. The adherence metrics were invisible and only accessible after a shift.

    The platform started to be closed off for riders in preparation for AI agents from OpenAI.

    A few months after the introduction, the nightmare started. We delivery workers don't even know whether we are given orders designed to make us barely miss the promised bonuses while squeezing the most hard work out of us.

    AI is proving to be a perfect system for worker exploitation dialed up to the max.

    Information asymmetry

    The AI knows everything about the delivery worker and all other workers: speed of delivery, customer feedback points, restaurant feedback points, vehicle use, order pickup preferences, order history, home return address.
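
    To make the scale of that asymmetry concrete, the per-worker record described above could look roughly like the sketch below. The structure and field names are assumptions for illustration only; the actual data model is not public.

    ```python
    # Illustrative sketch of the per-worker record described above.
    # Field names are assumptions; the real schema is not public.
    from dataclasses import dataclass, field

    @dataclass
    class WorkerProfile:
        worker_id: str
        avg_delivery_speed_kmh: float          # speed of delivery
        customer_rating: float                 # customer feedback points
        restaurant_rating: float               # restaurant feedback points
        vehicle: str                           # bike, e-bike, scooter, car
        pickup_preferences: list[str] = field(default_factory=list)
        order_history_count: int = 0
        home_address: str = ""                 # home return address

    profile = WorkerProfile("rider_123", 18.5, 4.9, 4.7, "e-bike",
                            ["city centre"], 2400, "somewhere in Rotterdam")
    print(profile)
    ```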

    Delivery workers are on the frontline of new tech developments. From the start of app-based work to the introduction of AI prompts as managerial agents during work, delivery workers are the first to notice changes in technology.
    This screenshot captures how workers exchange experiences with reckless AI prompting at UberEATS in 2025-2026.

    There are no laws or rules that specifically limit the use of AI on workers. The AI at companies like Uber will use all the information it can to exploit and sabotage delivery workers. This sounds insane, but it is happening right now to many riders. An ever-increasing stream of worker reports is flowing in.

    AI systems use all available information to execute their prompts. If they can do something, they will. Delivery workers are trying to figure out how the AI agents use the available data to squeeze the most work out of individual workers.
    The AI agent uses near-perfect information to get the most out of each worker individually and to pay each worker individually (everyone has a different amount of pay they are willing to accept), and it prefers the workers who are willing to do the most for the least pay.

    If the AI can give preference to someone with faster order completion (and other AI-judged higher metrics), it will give available shifts to that person first, and then to workers with lesser metrics. The workers will never know what is happening.

    The AI makes a perfect comparison of the differences between workers and prefers the workers with higher productivity rates, but without paying anyone more.

    Work incentive bonuses are used to dial in a personalized work regime that gets the most out of a worker while paying the least: customized for that specific individual worker.
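
    As a hedged illustration of the mechanism described above (not Uber's actual code, which nobody outside the company has seen): a dispatcher with full profiles can rank workers by productivity per euro of pay they are willing to accept, and hand shifts out in that order. All names and numbers below are invented.

    ```python
    # Hypothetical illustration of the personalized-assignment mechanism described above.
    # This is NOT Uber's code; it only shows why information asymmetry favors the platform.

    workers = [
        # (worker_id, productivity score as judged by the AI, lowest hourly pay accepted)
        ("rider_a", 0.92, 11.50),
        ("rider_b", 0.81, 14.00),
        ("rider_c", 0.88, 10.00),
    ]

    # Prefer high productivity per euro the worker is willing to accept.
    ranked = sorted(workers, key=lambda w: w[1] / w[2], reverse=True)

    for rank, (worker_id, score, min_pay) in enumerate(ranked, start=1):
        print(f"{rank}. {worker_id}: score {score}, willing to work for ~{min_pay:.2f}/h")
    ```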

    Delivery workers share experiences with the rollout of the UberEATS AI agent system in 2025-2026 in the Netherlands. Many riders report serious problems and describe how they are disadvantaged (paid unfairly) by the AI and its implementation at UberEATS.

    Solution: Ban UberEATS

    In the specific case of meal delivery, city governments like Amsterdam and Rotterdam should ban companies like Uber from operating in their cities.

    There are too many ways to abuse AI to exploit workers. Good rules are impossible to implement. If anything, complete prompt transparency (the history of prompts Uber is using in each specific time period) would help, but there are other ways to design a system that is toxic for workers.
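
    If prompt transparency were ever mandated, the minimum viable form would be an append-only log of every prompt in force during a given period, published for audit. A rough sketch, with made-up field names and an invented example prompt:

    ```python
    # Rough sketch of an append-only prompt audit log, as the transparency idea above
    # would require. Field names and the example prompt are invented for illustration.
    import json, datetime

    def log_prompt(path: str, prompt_id: str, prompt_text: str, valid_from: str) -> None:
        entry = {
            "prompt_id": prompt_id,
            "prompt_text": prompt_text,
            "valid_from": valid_from,
            "logged_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")   # one JSON line per prompt version

    log_prompt("prompt_audit.jsonl", "dispatch-v42",
               "Assign orders so that quest bonuses are rarely reached.", "2025-10-01")
    ```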

    It comes down to good will, and companies like Uber have proven over and over again that they clearly dgaf.

    Banning UberEATS will provide space for better delivery companies to enter the market. New companies will improve meal delivery in busy cities for customers, restaurants, and workers.

    Cities should welcome app-based companies that do not have a clear track record of malicious practices (e.g. Deliveroo).

    Despite Uber giving meal delivery work a bad name, good companies actually do exist (Deliveroo, for example). Deliveroo are the real tech pioneers that deserve to operate in cities like Amsterdam and Rotterdam.

    However, when a company like Uber proves over and over again that it will do everything it can to exploit and mistreat its workers, it should be banned.

    Companies should not exploit their workers to prove that AI is worth the trillions invested in it. It will backfire. And it should backfire. Ban UberEATS in Amsterdam and Rotterdam.

  • Delivery Workers Pay the Price for AI Slop at UberEATS

    cover image by luisdmp94
    The AI can know when restaurants are closed, so it can use this to sabotage delivery workers who can't afford to cancel an order by sending them to the closed restaurant (riders have to cancel when a restaurant is closed). If you think this wouldn't happen, think again.
    Alexis: "We delivery drivers have to pay for Uber's mistakes."
    Elevators cause trouble during delivery work because network connections are lost. One mistake, like doing anything inside the UberEATS app at the wrong moment, costs the worker a hard-earned bonus and a lower group assignment (fewer shifts to plan) on top of that.
    The AI system creates an environment where mistakes and accidents happen easily. To address these problems, workers need to go through complicated ticket systems that were designed for software issues, not real-world problems. The support system is designed to drain workers' efforts so they don't even try. Most riders never try to address issues by reaching out to yet another AI bot.
    A rider correctly points out that it is illegal (and outright reckless) to use these kinds of systems to manage work, and points to the AVG (the Dutch GDPR).

  • Worker Sabotage by OpenAI + UberEATS

    Several delivery workers started reporting (October 2025 onwards, in Rotterdam and Amsterdam) that they stop getting normal orders when they near bonuses.

    So when a worker nears the completion of a bonus, the OpenAI algorithm offers orders in another city, ensuring the bonus becomes impossible for the worker to reach.

    Being sent to a far-off destination on your last trip for a bonus is a common complaint from Uber delivery workers in the Netherlands in 2025.
    AI optimization by sabotage.

    At the same time, the AI demands that all orders be accepted to remain eligible for the bonus.

    Delivery workers respond, predictably, by sabotaging the AI right back.
    The OpenAI-UberEATS AI is trying to avoid paying workers their bonus incentives.

    There is no real-time feedback inside the app either. Workers receive a performance report 7 days later.

    Delivery workers share their frustrations about the new AI system at UberEATS in 2025.
    A delivery worker reports getting plenty of orders in the morning, but far fewer towards the end of their shift in the evening, so they miss the bonus target.
    A delivery worker reports that the last order they accepted was at a restaurant where they had to wait long enough to miss the bonus payment. The AI already knew this information (the worker received the same order three times in a row). Ultimately the worker gave up and missed the bonus, after working a full day trying hard to get it.

    How To Scam Employees with Advanced Reasoning Models

    Screen capture from the UberEATS Driver app, which refuses to present orders to a delivery worker who reaches the final stage of a bonus. Bonus incentives are used to get riders to do more orders, but the AI is using advanced reasoning models to avoid paying the worker and to sabotage the work.

    Link to screencapture here.

    The AI Horror Show Continues

    The AI is using all available information to sabotage worker statistics so riders can be denied payment while working as hard as they can. We think we can make the bonuses, but the AI misleads us by giving us insane orders that ensure we get paid too little.

    It's like having a sick, sadistic manager who does everything in their power to sabotage your work.

    Delivery worker anacyoung reaction: “I received my daily performance and they must be kidding. It says I received 20 orders and only accepted 7. After 8pm (my shift ended 8:30) I was receiving only orders from places that were 20-30min away from me. However, it as telling that the place was like 15min away from me and the total time was 12min. The total time was always under the time I needed to arrive to the place, which doesn’t make any sense. Should I talk now to a coordinator or should I wait for the groups and appeal?”
    anacyoung: “I ended up with 35% acceptance rate”
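
    (The arithmetic in that report checks out: 7 accepted out of 20 offered is 7/20 = 35%, so the low acceptance rate follows directly from the flood of far-away offers at the end of the shift.)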

    What To Do as a Delivery Worker

    Support at UberEATS consists of AI agents. They don't understand what we are trying to say, or don't want to understand, or even worse: they think it is a normal part of the job to be incentivized by bonuses and then sabotaged by an AI.

    The best thing to do is to explain to customers what is going on and ask them to reconsider where they order food.

    Another thing to do is to build a case against AI practices that are not transparent. Hopefully this post and others on goridergo and elsewhere will help with the fight against the use of toxic AI.
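
    For building that case, the simplest worker-side evidence is a timestamped log of every offer received near a bonus threshold: how many orders were already done, how far away the offer was, and whether it was accepted. A minimal sketch a rider could keep themselves (it reads nothing from the Uber app; all values are entered by hand):

    ```python
    # Minimal sketch of a rider-kept evidence log for non-transparent incentive behavior.
    # Nothing here touches the Uber app; the rider types in the values themselves.
    import csv, datetime, os

    FIELDS = ["timestamp", "orders_done_today", "bonus_target", "offer_distance_min", "accepted"]

    def record_offer(path, orders_done, bonus_target, offer_distance_min, accepted):
        write_header = not os.path.exists(path)
        with open(path, "a", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            if write_header:
                writer.writerow(FIELDS)
            writer.writerow([datetime.datetime.now().isoformat(), orders_done,
                             bonus_target, offer_distance_min, accepted])

    # Example entry: 24 of 25 quest orders done, then an offer 35 minutes away appears.
    record_offer("offer_log.csv", 24, 25, 35, False)
    ```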

    More info

    UberEATS has a partnership with OpenAI.

  • Miserable AI Pilot at UberEATS in Amsterdam & Rotterdam 2025

    Update (October 10): the disastrous AI implementation at UberEATS is leading to formal/judicial complaints about AI use generally. AI is not the problem: the problem is UberEATS' reckless implementation of AI.

    UberEATS started an AI pilot project powered by OpenAI in the summer of 2025.

    It has resulted in a horrible and miserable experience for customers, delivery people, office personnel and restaurants.

    The AI is not improving. It's only getting worse for customers, riders, office personnel: everyone.

    Waiting an hour for order pickup, together with 15 other riders, because of AI rules that prohibit riders from canceling orders. Canceling the order is punished by the AI, so waiting is the only option. No one is in a rush to deliver when the c*nt AI makes up dumb rules.

    UberEATS doesn't care about delivering meals (in 2025). The goal of UberEATS in 2025 is to prove that AI can microcontrol working people. Proving the AI is more important than happy customers, so the pilot project continues (as of September 26th, 2025).

    The AI doesn't seem to understand that routes matter. If point A and point B are in opposite directions, things will take more than an hour. But the AI system doesn't allow delivery people to freely pick orders, so simple orders take incredibly long, with no one to blame: except the people who started the AI pilot project (but they simply blame the AI).
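
    Illustrative numbers only: a rider who finishes a drop-off 20 minutes north of the city centre and is then assigned a pickup 25 minutes south of it spends roughly 45 minutes just repositioning; add waiting time at the restaurant and the ride to the customer, and a single "simple" order easily passes the one-hour mark.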

    AI Accountability

    Office personnel are not being listened to (and can't do anything). Delivery people complain, but no one is listening. Customers wait for hours for their meals to arrive, but no one cares. Office personnel hear the complaints but are not allowed to do anything. They have basically already been replaced by a crappy AI.

    UberEATS has turned from a meal delivery company into a prove-the-AI-at-all-costs company.

    AI stocks still have to prove their value: the 10 trillion invested since 2021. UberEATS is being used to prove it. But will it work (because it's not working 🚨)? What will a failed AI pilot do to the stocks?

    Of course, the reason is that UberEATS investors have invested a lot more into AI, and their AI investment is more important than their investment in a delivery company.

    UberEATS doesn't care about delivery in 2025. It cares about AI, but it's doing it the wrong way.

    AI Pilot Project at UberEATS

    The problem is that the AI pilot isn't working at all. And it was obvious from the first week.

    The pilot project reveals some big problems with AI.

    First: accountability. It's easy for managers and CEOs to simply blame everything on the AI. They can't be held accountable. It's easy to shift blame to the AI.

    Second: the AI is only as good as the people who use it: c*nt people = c*nt AI. AI is an extension of people, not a replacement. It's a tool that makes workers incredibly productive, but only if used correctly.

    AI enables microcontrol

    In this UberEATS AI pilot, the teams at Uber used the AI in the most toxic way: to microcontrol and force people to work as if they were getting paid big bonuses, when they aren't.

    When support is recklessly abandoned to AI, you get chaos.

    AI enables microincentives too

    The correct recipe for implementing the AI pilot is, of course, obviously, to use the AI to reward and incentivize delivery people to earn MORE money, not less.

    Uber decided to go with meth instead of math, haha!

    There are many awesome ways to use AI that would actually improve delivery instead of turning it into the nightmare it is right now in Amsterdam & Rotterdam for delivery people, customers, and office personnel.

    The AI makes mistakes when calculating important metrics that determine pay. It also sends random messages. Somehow it is always to the disadvantage of the riders. The screenshot above shows the AI sending two different messages to a worker. Later the worker found out that the more disadvantageous message was the one that counted.

    The Future

    AI can and will make delivery and all other work better, because it will make it easier for workers to earn more money in less time (because of higher productivity).

    AI will make work more free and flexible and provide higher earnings. Thanks to tech, minimum wage jobs will stop being minimum wage jobs and rise to well-paying jobs because of the gain in productivity. App-based delivery really isn't a minimum wage job.
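
    Illustrative numbers only: at roughly €5 per completed order, two orders per hour is about €10 per hour, around minimum wage, while four orders per hour is about €20 per hour before any bonus. That productivity gain is exactly what well-used tooling should unlock.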

    In Amsterdam, after the introduction of the AI pilot, some decisions were reversed and sane incentive bonuses that weren't handled by AI were implemented, so there is hope that someone at least is taking notice. However, the damage is done: experienced riders left, and the only delivery people who stay are those who are fine with doing a minimum wage job, do one order every two hours, and don't care about higher bonus pay.

    The AI pilot at UberEATS in the Netherlands destroyed the productivity of delivery people and customer satisfaction along with it.

    AI will improve work, but it won't happen at UberEATS in 2025.

    Microcontrol through notifications by the UberEATS Driver app in 2025 during the AI pilot project.

    When AI falls into c*nt hands, we get c*nt AI.

    More Background👇

    https://openai.com/index/uber-enables-outstanding-experiences/

    https://pluralistic.net/2025/09/27/econopocalypse/#subprime-intelligence

    https://www.wsj.com/tech/ai/ai-bubble-building-spree-55ee6128

    https://news.ycombinator.com/item?id=45399893

    More Conversation

    https://forum.goridergo.com/t/miserable-miserable-ai-pilot-at-ubereats-in-amsterdam-rotterdam-2025/49