Tag: ai slop

  • Uber will Lead the Inevitable AI Crash

    It’s not technologies that cause crashes, it’s expectations.

    AI is expected to magically solve every problem imaginable in the best way possible. But AI is not Plug & Play: it requires competent, hard-working teams to use it successfully.

    Delivery workers and taxi drivers are the first to experience new technologies deeply, and the first to benefit or suffer from them.

    When the iPhone and the App Store grew big, we were the first to start using apps as a platform to do our jobs.

    And we liked it! Thanks to algorithms and apps we got paid more, because we were more productive than traditional delivery workers, and the job was flexible too. Companies like Deliveroo showed what great opportunities for great work were possible with apps and algorithms.

    However, from the start one company in particular stood out as the biggest and most dgaf platform in logistics worldwide: Uber.

    The Uber AI Boom of the '20s

    In the AI boom, Uber is the perfect candidate for showing that AI is not Plug & Play, whatever the stock market expects. AI augments workers immensely, but it can't replace them. Expecting it to replace worker decision-making leads to business disaster.

    Uber’s history of reckless business practices and disregard for its customers, workers and partners makes it the perfect candidate to lead the stock market crash. It also has the stomach to swallow more losses, discontent and lawsuits than other platforms.

    AI is not Plug & Play

    Stock markets don’t crash because of technologies; they crash because of false expectations.

    AI is not Plug & Play.

    Successful AI implementations require competent AI teams that understand the business domain.

    You can’t just fire everyone, feed the AI the business data and simply ask it to improve productivity. Unfortunately, that seems to be what happened at Uber and UberEATS, resulting in lawsuits and gross violations of labor laws by Uber.

    UberEATS’ summer AI experiment in the Netherlands shows what a disaster AI can be. The company is hell-bent on proving the viability of its AI experiment.

    Its AI is denying workers bonuses, paying workers differently based on worker characteristics, and it’s getting worse over time.

    AI Expectations

    The promise of AI is that it can independently improve work processes, but it can’t.

    The big problem is that it can and will learn the wrong lessons if it isn’t corrected continuously and immediately by decision-makers.

    It doesn’t matter how advanced the reasoning models are: agency and the power to decide can’t be coded. Once the AI has run long enough to learn the wrong lessons, it won’t be able to recover.

    Right now, in 2025-2026, the Uber AI pilot project is burying itself deeper into unhealthy business practices every passing day, even denying riders pay if they don’t do what the AI expects of them. Workers are bullied and fired off the platform without clear reasons. Complaints are recklessly left to the national justice system, because Uber’s own systems are unclear and purposefully designed to demotivate workers from getting help.

    AI Over Everything

    Customers, restaurants and riders are scammed out of service and money by Uber, but the AI boom must go on. Uber is all in on AI, and it won’t let its workers or customers get in its way.

    When to Short Stocks

    Uber will likely be the company that leads the stock market crash.

    The AI pilot project has resulted in a disastrous delivery experience for riders, customers and restaurants in Amsterdam, Rotterdam and other cities in the Netherlands in 2025 so far.

    The AI is learning the wrong things and it’s getting worse. I expect the summer of 2026 will be when Uber Eats breaks. The markets will know, and this will set off the market crash 🪄

  • Personalized Exploitation through Extreme Information Asymmetry between Worker and AI Agents at UberEATS

    UberEATS has been known as a toxic company by the riders who have worked with it from the first days of its operation. We riders like doing delivery work, but UberEATS is in our way.

    From the start, UberEATS has used malicious tactics, ranging from hidden algorithms to purposefully designed mechanisms to pay its workers less than minimum wage (like the guaranteed minimum wage rate in 2019-2020 in the Netherlands).

    However, with the introduction of AI to the platform, things feel different, and not in a good way.

    Uber started a partnership with OpenAI in the summer of 2025. The first changes we workers experienced were the closing off of our personal metrics, which used to be open and transparent for us. Metrics like bonuses were hidden from us. Customer feedback was hidden too. Then restaurant feedback metrics started to be hidden as well. We got new adherence requirements and group assignments based on adherence. The adherence metrics were invisible and only accessible after a shift.

    The platform started to be closed off for riders, in preparation for AI agents from OpenAI.

    A few months after the introduction, the nightmare started. We delivery workers don’t even know whether we get orders that are designed to make us just barely miss the promised bonuses, while squeezing the most work out of us.

    AI is proving to be a perfect system for worker exploitation dialed up to the max.

    Information asymmetry

    The AI knows everything about the delivery worker and all other workers: speed of delivery, customer feedback points, restaurant feedback points, vehicle use, order pickup preference, order history, home return address.

    Delivery workers are on the frontline of new tech developments. From the start of app-based work to the introduction of ai prompts as managerial agents during work, delivery workers are the first to notice changes in technology.
This screenshot captures how workers exchange experiences with reckless AI prompting at UberEATS in 2025-2026.

    There are no laws or rules that specifically limit the use of AI on workers. The AI at companies like Uber will use all the information it can to exploit and sabotage delivery workers. This sounds insane, but it is happening right now to many riders. An ever-increasing stream of worker reports is flowing in.

    AI systems use all available information to execute their prompts. If they can do something, they will. Delivery workers are trying to figure out how the AI agents use the available data to squeeze the most work out of individual workers.
    The AI agent uses perfect information to get the most out of each worker individually and to pay each worker individually (everyone has a different amount of pay they are willing to accept), and it prefers workers who are willing to do the most for the least pay.

    If the AI can give preference to someone with faster order completion (and other AI-judged higher metrics), it will give available shifts to that person first, and then to other workers with lesser metrics. The workers will never know what’s happening.

    The AI will make a perfect comparison between workers and prefer those with higher productivity rates, but without paying anyone more.

    Work incentive bonuses are used to dial in a personalized work regime that extracts the most from a worker while paying the least: customized for that specific individual worker!
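    To make the alleged mechanism concrete, here is a minimal sketch of how such a personalized-incentive system could work in principle. Everything in it is a hypothetical assumption for illustration: the data fields, the weights and the bonus formula are invented, and none of this is Uber’s actual code.

```python
# Hypothetical sketch of personalized incentives driven by per-worker data.
# All fields, weights and formulas are illustrative assumptions, not Uber's code.
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    deliveries_per_hour: float   # productivity metric the platform tracks
    min_acceptable_pay: float    # inferred "reservation pay" per delivery
    bonus_hit_rate: float        # fraction of past bonus targets reached

def productivity_score(w: Worker) -> float:
    # Prefer faster workers; the worker never sees this score.
    return w.deliveries_per_hour

def assign_shifts(workers: list[Worker], open_shifts: int) -> list[Worker]:
    # Offer shifts to the highest-scoring workers first.
    ranked = sorted(workers, key=productivity_score, reverse=True)
    return ranked[:open_shifts]

def personalized_offer(w: Worker) -> float:
    # Pay each worker just above the minimum they are known to accept,
    # regardless of how productive they are.
    return round(w.min_acceptable_pay * 1.02, 2)

def bonus_target(w: Worker) -> int:
    # Set the bonus threshold just above what this worker usually achieves,
    # so the bonus motivates maximum effort but is rarely paid out.
    return int(w.deliveries_per_hour * 8 * (1.0 + 0.1 * w.bonus_hit_rate)) + 1

workers = [
    Worker("A", deliveries_per_hour=3.2, min_acceptable_pay=4.10, bonus_hit_rate=0.9),
    Worker("B", deliveries_per_hour=2.1, min_acceptable_pay=5.50, bonus_hit_rate=0.4),
]

for w in assign_shifts(workers, open_shifts=1):
    print(w.name, personalized_offer(w), bonus_target(w))
```

    In this toy version, the faster worker gets the only open shift, is offered pay just above their inferred minimum, and receives a bonus target set just out of reach. The point is not the exact numbers but that every knob is tuned per worker, invisibly to the worker.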

    Delivery workers are sharing experiences with the rollout of the UberEATS AI agent system in 2025-2026 in the Netherlands. Lots of riders report serious problems and describe how they are disadvantaged (paid unfairly) by the AI and its implementation at UberEATS.

    Solution: Ban UberEATS

    In the specific case of meal delivery, city governments like those of Amsterdam and Rotterdam should ban companies like Uber from operating in their cities.

    There are too many ways to abuse AI to exploit workers. Good rules are impossible to implement. If any rule would help, it is complete prompt transparency (the history of prompts Uber uses in each specific time period), but there are other ways to design a system that is toxic for workers.

    It comes down to goodwill, and companies like Uber have proven over and over again that they clearly dgaf.

    Banning UberEATS will provide space for better delivery companies to enter the market. New companies will improve meal delivery in busy cities for customers, restaurants and workers.

    Cities should welcome app-based companies that do not have a clear track record of malicious practices (e.g. Deliveroo).

    Despite Uber giving meal delivery work a bad name, good companies actually do exist (Deliveroo, for example). Deliveroo is a real tech pioneer that deserves to operate inside cities like Amsterdam and Rotterdam.

    However, when a company like Uber proves over and over again that it will do everything it can to exploit and mistreat its workers, it should be banned.

    Companies should not exploit their workers to prove that AI is worth the trillions invested in it. It will backfire, and it should backfire. Ban UberEATS in Amsterdam and Rotterdam.