
IBM’s A.I. Mayflower ship is crossing the Atlantic, and you can watch it live

“Seagulls,” said Andy Stanford-Clark, excitedly. “They’re quite a big obstacle from an image-processing point of view. But, actually, they’re not a threat at all. In fact, you can totally ignore them.”

Stanford-Clark, the chief technology officer for IBM in the U.K. and Ireland, was exuding nervous energy. It was the afternoon before the morning when, at 4 a.m. British Summer Time, IBM’s Mayflower Autonomous Ship, a crewless, fully autonomous trimaran piloted entirely by IBM’s A.I. and built by nonprofit ocean research company ProMare, was set to commence its voyage from Plymouth, England, to Cape Cod, Massachusetts. IBM has been working on ProMare’s vessel for several years, alongside a global consortium of other partners. And now, after countless tests and hundreds of thousands of hours of simulation training, it was about to set sail for real.

Stanford-Clark was running through the potential risks. Seagulls, he pointed out, were something of a false alarm. From an image-recognition perspective, they were a challenge because they tended to get right up in the camera lens, where they looked like enormous winged obstacles that needed to be avoided at all costs. In practice, though, they flew away as soon as the futuristic, five-ton triple-hulled ship got close. The biggest headache seagulls posed was that they were an extremely common obstacle that the Mayflower had to be instructed to totally ignore, against all its obstacle-avoiding instincts.
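The article doesn’t describe how MarineAI implemented this, but in a typical vision pipeline the fix amounts to dropping a known-harmless detector class before detections ever reach collision avoidance. The Python sketch below is purely illustrative, with invented names and labels; it shows the general shape of that filter, not the Mayflower’s actual perception stack.

```python
# Illustrative only: the Mayflower's real perception code isn't public.
# This shows the general pattern of discarding a known-harmless class
# (here, seagulls) before detections reach collision avoidance.
from dataclasses import dataclass

IGNORED_CLASSES = {"seagull"}  # hypothetical label from the image classifier

@dataclass
class Detection:
    label: str          # class name assigned by the onboard vision model
    confidence: float   # classifier confidence, 0.0 to 1.0
    bearing_deg: float  # direction of the object relative to the bow

def hazards(detections: list[Detection]) -> list[Detection]:
    """Filter raw detections down to ones the planner should avoid.
    A gull filling the lens looks enormous, so it has to be dropped
    by label, not by apparent size."""
    return [d for d in detections if d.label not in IGNORED_CLASSES]

# Example frame: the gull is discarded; the buoy goes to the planner.
frame = [Detection("seagull", 0.97, 2.0), Detection("buoy", 0.88, -10.0)]
print([d.label for d in hazards(frame)])  # ['buoy']
```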


The challenge of sailing a ship autonomously isn’t the same as driving a car autonomously. An autonomous car must steer down predefined streets, watching out for other cars, buses, cyclists, and pedestrians, all while interpreting road scenes at high speed. In the open ocean, lanes are wider, population density is lower, and events happen far more slowly (although turning circles and stopping distances are also significantly worse). There is far less risk to human life when an A.I. pilots a robot ship across the Atlantic Ocean than when a self-driving car navigates an average American city during rush hour.


But there is nonetheless a big challenge here: namely, that the Mayflower Autonomous Ship is performing its three-week crossing, which commenced June 15, with zero human intervention. Everything is being carried out autonomously. While the course has been set, any deviation from that course, whether responding to weather conditions or avoiding obstacles larger than a seagull, is handled by the ship’s A.I. Captain, built by startup MarineAI on IBM’s A.I. and automation technologies. Any big mechanical failure (all too easy when you’re sloshing around in the open ocean) and suddenly one of the world’s biggest autonomous vehicles becomes as useful as a laptop left overnight in a full bathtub.

For folks like Stanford-Clark, it’s a source of stress. For fascinated onlookers, who can tune in to watch every step of the Mayflower Autonomous Ship’s progress via a livestream dashboard built by IBM iX, the company’s digital agency, it’s just another part of the intrepid adventuring fun.

Alone together


The late comic Patrice O’Neal once joked that he liked to be alone, but not lonely. The same sentiment could be applied to the Mayflower: It’s carrying out its cross-ocean voyage solo, but fans from around the world can tune in to watch its progress. Thanks to IBM’s MAS400 dashboard, it’s possible to watch a livestream taken from the vessel’s onboard cameras. There are six cameras in all, and these swap in and out to provide a view of the ship’s surroundings.

In 2021, livestreaming is no big deal, of course. The ability to instantly stream video around the world with minimal latency is so commonplace that we likely don’t stop to marvel at it. But livestreaming from the middle of the ocean is very different from livestreaming from your backyard.

“What people don’t realize is once you get more than just a handful of miles offshore, there’s no cell phone signal,” said Stanford-Clark. “Then all bets are off. All the solutions [at that point] become very expensive and low bandwidth from that point on.”

Today, low bandwidth might mean a YouTube video that takes a few seconds to load at 360p. No such luck here, though. While the onboard cameras record 1080p video, the feed is transcoded in real time using ultralow-bit-rate encoding techniques so it can be transmitted over a connection that, at times, offers as little as 6 kbps, a fraction of what even a 1995-era dial-up modem could manage. The low bandwidth is a consequence of satellite connectivity, which tops out at around 200 kbps at best and must also carry the ship’s telemetry data.
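Videosoft’s encoder is proprietary, but the kind of trade-offs involved can be sketched with off-the-shelf tools. The snippet below is a minimal illustration rather than the Mayflower’s actual pipeline: it uses Python to drive ffmpeg’s standard H.264 rate-control flags, shrinking resolution and frame rate until the stream approaches a 6 kbps budget. The file names and parameter values are placeholders chosen for the example.

```python
import subprocess

# Hypothetical illustration: squeeze a 1080p feed toward a ~6 kbps budget
# using ffmpeg's standard rate control. Videosoft's real codec is
# proprietary; these values only demonstrate the trade-offs involved.
TARGET_BITRATE = "6k"   # total video budget, roughly the worst-case link
BUFFER_SIZE = "12k"     # rate-control buffer (~2 seconds at target rate)

def transcode_lowband(src: str, dst: str) -> None:
    """Re-encode `src` at drastically reduced resolution, frame rate,
    and bit rate so it has a chance of surviving a satellite link."""
    subprocess.run([
        "ffmpeg", "-i", src,
        "-vf", "scale=160:-2,fps=2",  # tiny frame, 2 fps: quality goes first
        "-c:v", "libx264",
        "-b:v", TARGET_BITRATE,
        "-maxrate", TARGET_BITRATE,
        "-bufsize", BUFFER_SIZE,
        "-an",                        # drop audio entirely
        dst,
    ], check=True)

if __name__ == "__main__":
    transcode_lowband("camera_feed.mp4", "lowband.mp4")
```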


To help make this crazy dream a reality, ProMare and IBM teamed up with Videosoft, a company that specializes in developing the technology that makes it possible to livestream in incredibly challenging environments with minimal bandwidth. “Making sure that video gets through in the worst possible [environments is what we do],” Stewart McCone, CEO of Videosoft, told Digital Trends.

Videosoft has long developed algorithms and other tools for transmitting video in scenarios where any dropout could potentially be fatal, for clients including the police and the military. The company’s technology is able not only to stream in low-bandwidth situations, but also to automatically adapt to the available bandwidth, encoding and transmitting video at the highest possible quality.
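Adaptive encoding of this kind generally follows a simple feedback loop: measure what the link is actually delivering, then re-tune the encoder to fit. The sketch below is a hypothetical illustration of that loop in Python; the throughput tiers, headroom factor, and function names are invented for the example and are not Videosoft’s algorithm.

```python
import time

# Hypothetical quality tiers: (bit rate in bps, frame width, frames/sec).
# A real encoder adapts continuously; discrete tiers keep the sketch simple.
TIERS = [
    (200_000, 640, 10),  # best case the satellite link allows
    (50_000, 320, 5),
    (6_000, 160, 2),     # worst-case budget quoted for the Mayflower
]

def pick_tier(measured_bps: float, headroom: float = 0.8):
    """Choose the highest tier that fits within a fraction of measured
    throughput, leaving headroom for telemetry sharing the same link."""
    for bitrate, width, fps in TIERS:
        if bitrate <= measured_bps * headroom:
            return bitrate, width, fps
    return TIERS[-1]  # never give up entirely: fall back to the floor

def adapt_loop(measure_throughput, reconfigure_encoder, interval_s=5.0):
    """Feedback loop: sample link throughput, then re-tune the encoder.
    Both callables stand in for whatever the real system provides."""
    while True:
        bps = measure_throughput()
        reconfigure_encoder(*pick_tier(bps))
        time.sleep(interval_s)
```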

McCone likened the overall challenge of streaming video from the middle of the ocean to trying to stream video footage from space. “It’s a very, very, very similar challenge,” he said.

In some ways, it’s even tougher. While there is no expectation that video from, say, a Mars rover will arrive in real time, given the distances involved, the Mayflower Autonomous Ship’s footage is intended to be live, with a latency of a couple of seconds at worst. That rules out trading time for quality with slow, high-fidelity transfers.

Capturing the public imagination


The Mayflower Autonomous Ship isn’t IBM’s first bold televised challenge, of course. Its 1997 Deep Blue series of chess matches with grandmaster Garry Kasparov captured the public’s imagination more than any other public A.I. demonstration of the last century. This century, its 2011 Jeopardy! showdown between question-answering A.I. Watson and show champions Brad Rutter and Ken Jennings was a ratings winner, garnering the show’s highest audience numbers in more than half a decade.

Will ProMare’s robot ship be an A.I. triumph like those previous milestones? Or will it sputter to a halt somewhere in the middle of the ocean? Whatever happens, thanks to IBM’s dashboard (and some very smart compression technology), you’ll be able to tune in and follow along.

You can check out IBM’s Mayflower Autonomous Ship dashboard at MAS400.com.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…