OpenAI’s DevDay Preview, ICE’s 24/7 Surveillance, and Databricks’ 100× Boost
Learn more, scroll less. Curated AI/Tech/Business news and articles.

Welcome to the latest edition of Top249 🤩
We're here to keep you updated on AI, tech, and business news and articles, so you can save time and focus on learning and growth.
Let's get started! 😎
Important: If you're reading this in the Promotions tab, please drag it to your Inbox and click ‘Yes’ when your email provider asks whether to do this for future emails.
DevDay is seen as a pivotal moment for OpenAI, as rivals such as Google’s Gemini, Anthropic’s Claude, and Meta’s AI efforts intensify the competition.
Observers say OpenAI may unveil a “ChatGPT browser,” pushing the company beyond chatbot products alone.
The focus has shifted toward enterprise AI, agent orchestration, developer tools, and improving access and pricing to maintain its developer ecosystem.
ICE plans to hire nearly 30 contractors for around-the-clock monitoring of social media platforms to produce leads for enforcement operations.
The agency expects tight turnaround times (minutes to hours) on cases and wants to integrate AI into the surveillance workflow.
While contractors must operate under restrictions (e.g., no fake profiles, and all analysis performed on ICE servers), privacy advocates warn of mission creep and risks to free speech.
Snapchat will begin charging users for storing their past photos and videos ("Memories") once they exceed 5 GB.
The announcement doesn’t yet specify the cost for UK users; pricing is expected to roll out gradually around the world.
This change has sparked user backlash, especially from longtime users who depend on Memories as a core feature.
Databricks set to accelerate agentic AI by up to 100x with ‘Mooncake’ technology — no ETL pipelines for analytics and AI
(Click 👆️ to read more)
Databricks acquired Mooncake to eliminate the need for traditional ETL pipelines by enabling direct transformation between transactional PostgreSQL data and analytics formats.
The new architecture (via “Moonlink”) can deliver performance gains of 10× to 100× or more for data movement and transformation workloads.
For enterprises building agentic AI applications, this reduces the infrastructure burden and speeds iteration by eliminating delays tied to data engineering.
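To make the “no ETL hop” idea concrete, here is a brief conceptual sketch. The table name orders_synced and the idea of querying it directly from Spark SQL are illustrative assumptions for this example, not Databricks’ or Mooncake’s actual interface.

```python
# Conceptual sketch of the pattern described above, not Databricks' or Mooncake's API.
# Assumption: transactional PostgreSQL rows are already mirrored into an analytics
# table named "orders_synced" that the SQL engine can read directly.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("no-etl-sketch").getOrCreate()

# Traditional flow: extract from Postgres -> transform -> load into a warehouse table,
# then query; an agent waits on that pipeline before it sees fresh data.
# Direct flow sketched here: the agent queries the mirrored table as soon as rows land.
fresh_spend = spark.sql("""
    SELECT customer_id, SUM(amount) AS total_spend
    FROM orders_synced                       -- hypothetical table synced from PostgreSQL
    WHERE order_ts > current_timestamp() - INTERVAL 1 HOUR
    GROUP BY customer_id
""")
fresh_spend.show()
```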
Thinking Machines' first official product is here: meet Tinker, an API for distributed LLM fine-tuning
(Click 👆️ to read more)
Thinking Machines, founded by former OpenAI CTO Mira Murati, has released its first product: Tinker, a Python API for fine-tuning large language models.
Tinker gives users low-level control over training loops, loss functions, and data pipelines while the infrastructure handles distributed compute complexity.
It supports open-weight models and LoRA tuning, and is already being used by labs at Berkeley, Stanford, Princeton, and Redwood Research in its private beta.
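For a sense of what “user-owned training loop, managed distributed compute” can look like, here is a minimal illustrative sketch. The primitive names (forward_backward, optim_step, sample, save_state) follow Thinking Machines’ public description of Tinker, but the client construction, argument names, and batch format below are assumptions for illustration, not the library’s actual signatures.

```python
# Illustrative sketch of a hand-written fine-tuning loop against a Tinker-style API.
# Primitive names follow the public announcement; client setup, arguments, and the
# batch format are assumptions and may not match Tinker's real signatures.
import tinker  # assumed package name

def batches():
    """Stand-in for a user-defined data pipeline (the user controls data and loss)."""
    yield [{"prompt": "Translate to French: hello", "completion": "bonjour"}]

service = tinker.ServiceClient()                 # assumed entry point
trainer = service.create_lora_training_client(   # assumed LoRA fine-tuning helper
    base_model="meta-llama/Llama-3.1-8B",        # an open-weight base model
)

for step, batch in enumerate(batches()):
    trainer.forward_backward(batch)              # compute gradients on managed GPUs
    trainer.optim_step()                         # apply the optimizer update

    if step % 100 == 0:
        print(step, trainer.sample(prompt="Say hi in French"))  # spot-check generations

trainer.save_state("my-lora-checkpoint")         # persist the tuned adapter
```

The appeal of this shape is that the loop, loss, and data stay in the user’s own code, while the service absorbs the distributed-training complexity behind a handful of primitives.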
If you find Top249 useful, consider sharing this with your friends or colleagues.
Till next time.