Recent Events for foo.be MainPageDiary (Blog)

FeedCollection

hack.lu 2007

http://www.hack.lu/news.rdf returned no data, or LWP::UserAgent is not available.

adulau SVN

http://a.6f2.net/svnweb/index.cgi/adulau/rss/ returned no data, or LWP::UserAgent is not available.

Michael G. Noll

http://www.michael-noll.com/feed/ returned no data, or LWP::UserAgent is not available.

Justin Mason

2025-10-17

  • 11:23 UTC Obituary: Farewell to robots.txt (1994-2025) It is with deep sorrow that we announce the end of robots.txt, the humble text file that served as the silent guardian of digital civility for thirty years. Born on February 1, 1994, out of necessity when Martijn Koster’s server crashed under a faulty crawler named “Websnarf,” robots.txt passed away in July 2025, not by Cloudflare’s hand, but from the consequences of systematic disregard by AI corporations. The protocol taught us that technology can be based on human values like ethics and morality. It showed that voluntary compliance works when all parties benefit. Its greatest achievement was perhaps preserving the internet for three decades from what it has become today – a soulless extraction machine. Tags: internet history robots.txt crawlers web obituaries protocols ai via:mariafarrell
  • 11:05 UTC LOTO TIL about "LOTO" -- "Lock Out Tag Out". This is basically a physical mutex lock -- each worker has their own padlock which they attach to dangerous equipment in order to ensure that it can't be turned on (potentially killing someone) while it's being worked on; once they've completed the high-risk task, they then remove their own lock. Removing or damaging someone else's lock is considered an Extremely Big Deal and liable to get that person fired. (A toy sketch of the analogy follows this list.) Tags: loto mutex locks workplaces osha safety via:ChristinaB
  • 11:01 UTC Migrating to Hetzner The Digital Society co-op migrated their (relatively small) infrastructure from AWS to Hetzner, mainly using k8s. One interesting detail is that Hetzner don't have the concept of an AZ, which is not a great sign in resiliency terms; if you need high uptime, it is important to be able to run a multi-AZ service with replicas spread across independent datacenters that are more-or-less colocated, within a few milliseconds of each other. Azure, AWS, and GCP all offer this concept, but not Hetzner; a minimal k8s sketch of zone-spread replicas follows this list. hmm Tags: hetzner uptime k8s aws migration cloud infrastructure ops
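
A toy sketch of the LOTO analogy from the item above. The GroupLockout class is hypothetical (not any real library or safety system); it just illustrates why this is less a single mutex than a group lock: equipment can only be energized once every worker has removed their own padlock, and nobody may remove a lock they didn't attach.

```python
# Hypothetical sketch of the LOTO ("Lock Out Tag Out") analogy: equipment may
# only be energized when no worker's padlock is attached, and each worker may
# remove only their own lock. Illustrative only -- not a real safety system.

class GroupLockout:
    def __init__(self):
        self._padlocks: set[str] = set()   # names of workers with locks attached

    def attach(self, worker: str) -> None:
        """Worker attaches their personal padlock before starting work."""
        self._padlocks.add(worker)

    def remove(self, worker: str) -> None:
        """A worker may remove only their own padlock, once their task is done."""
        if worker not in self._padlocks:
            raise PermissionError(f"{worker} has no padlock attached here")
        self._padlocks.remove(worker)

    def can_energize(self) -> bool:
        """Equipment may only be turned on when no padlocks remain."""
        return not self._padlocks


lockout = GroupLockout()
lockout.attach("alice")
lockout.attach("bob")
lockout.remove("alice")
print(lockout.can_energize())  # False -- bob's padlock is still attached
lockout.remove("bob")
print(lockout.can_energize())  # True -- safe to energize
```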
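
And the multi-AZ point from the Hetzner item, as a minimal sketch: in Kubernetes the usual way to spread replicas across AZs is a topology spread constraint over the standard topology.kubernetes.io/zone node label. The Deployment below (app name and image are illustrative assumptions) is built as a plain Python dict and dumped to YAML; it only does anything useful on a cluster whose nodes actually carry zone labels, which is exactly what the entry says Hetzner doesn't model.

```python
# Hypothetical sketch: a Deployment whose replicas are spread across zones.
# Assumes nodes expose the standard topology.kubernetes.io/zone label; the app
# name and image are illustrative placeholders.
import yaml  # PyYAML

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "topologySpreadConstraints": [{
                    "maxSkew": 1,
                    "topologyKey": "topology.kubernetes.io/zone",
                    "whenUnsatisfiable": "DoNotSchedule",  # refuse to co-locate rather than degrade
                    "labelSelector": {"matchLabels": {"app": "web"}},
                }],
                "containers": [{"name": "web", "image": "nginx:1.27"}],
            },
        },
    },
}

# Pipe the output into `kubectl apply -f -` to try it on a real cluster.
print(yaml.safe_dump(deployment, sort_keys=False))
```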

2025-10-16

  • 08:41 UTC nanochat: The best ChatGPT that $100 can buy This is really impressive, both as a small-scale from-scratch rebuild of a modern LLM, and as a well-written walkthrough of the training process for a large language model. 4 hours, $92, and you wind up with a relatively functional tiny LLM! Very cool. Tags: llms machine-learning ml chat nanochat training hacks

2025-10-14

  • 13:42 UTC RetroHax: PS2 Fixing Frenzy wow! extremely detailed -- with copious photos -- process of restoring classic Playstation 2 consoles. Worth it for the great photos of decades-old hardware being repaired and restored, and there's good advice in there for the next hardware repair job I need to do Tags: ps2 playstation repair restoring restoration gaming retrocomputing

2025-10-06

  • 10:09 UTC OSWALD "OSWALD is a Write-Ahead Log (WAL) design built exclusively on object storage primitives. It works with any object storage service that provides read-after-write consistency and compare-and-swap operations, including AWS S3, Google Cloud Storage, and Azure Blob Storage. The design supports checkpointing and garbage collection, making it suitable for State Machine Replication (SMR) [and] has been formally specified and verified using the P programming language." - by Nicolae Vartolomei Tags: oswald wal object-storage s3 gcs azure smr storage formal-methods design architecture cloud-computing
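
A minimal sketch of the primitive that description leans on: appending log records guarded by a compare-and-swap on an object-store manifest. The ObjectStore class below is an in-memory stand-in, not the real S3/GCS/Azure APIs, and this is not OSWALD's actual design -- just an illustration of a CAS-committed append.

```python
# Hypothetical illustration of a compare-and-swap (CAS) guarded WAL append on an
# object store. ObjectStore is an in-memory toy with read-after-write
# consistency and CAS on a version counter; not OSWALD's actual design.
import json

class ObjectStore:
    """Toy object store: key -> (version, bytes), with conditional writes."""
    def __init__(self):
        self._objects = {}

    def get(self, key):
        return self._objects.get(key, (0, None))

    def put_if_version(self, key, expected_version, data) -> bool:
        current_version, _ = self._objects.get(key, (0, None))
        if current_version != expected_version:
            return False                        # someone else won the race
        self._objects[key] = (current_version + 1, data)
        return True

def append_record(store: ObjectStore, record: dict) -> bool:
    """Append a WAL record; the caller retries if the CAS on the manifest loses."""
    version, raw = store.get("wal/manifest")
    manifest = json.loads(raw) if raw else {"next_seq": 0}
    seq = manifest["next_seq"]
    # Write the record under an immutable, sequence-numbered key first ...
    store.put_if_version(f"wal/{seq:020d}.json", 0, json.dumps(record).encode())
    # ... then commit it by advancing the manifest with a conditional write.
    manifest["next_seq"] = seq + 1
    return store.put_if_version("wal/manifest", version, json.dumps(manifest).encode())

store = ObjectStore()
ok = append_record(store, {"op": "set", "key": "x", "value": 1})
print("committed" if ok else "lost the race; retry")
```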

2025-09-29

  • 10:58 UTC LLM Observability in the Wild – Why OpenTelemetry should be the Standard OTel is generally ahead in terms of how code meets metrics nowadays, as far as I can see. Works for me; a minimal tracing sketch follows this list. Tags: otel observability opentelemetry llms ai coding
  • 10:06 UTC Google just erased 7 years of our political history "Google appears to have deleted its political ad archive for the EU; so the last 7 years of ads, of political spending, of messaging, of targeting - on YouTube, on Search and for display ads - for countless elections across 27 countries - is all gone. We had been told that Google would try to stop people placing political ads, a "ban" that was to come into effect this week. I did not read anywhere that this would mean the erasure of this archive of our political history." Tags: google advertising ads politics ireland eu europe youtube elections history
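
A minimal sketch of what OTel-based LLM observability can look like in code, using only the standard OpenTelemetry Python tracing API. The span name and attribute keys here are illustrative choices, not an official semantic convention, and call_llm is a placeholder for whatever model client you use.

```python
# Minimal sketch: wrap an LLM call in an OpenTelemetry span.
# Requires opentelemetry-api and opentelemetry-sdk; span/attribute names below
# are illustrative, not a fixed semantic convention.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("llm-demo")

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; swap in your client of choice.
    return f"echo: {prompt}"

prompt = "hello"
with tracer.start_as_current_span("llm.chat") as span:
    span.set_attribute("llm.prompt.length", len(prompt))
    reply = call_llm(prompt)
    span.set_attribute("llm.response.length", len(reply))
```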

2025-09-24

  • 10:46 UTC To make AI safe, we must develop it as fast as possible without safeguards lol: As the leader of an AI company which stands to benefit enormously if I convince enough investors that AGI is inevitable, it’s clear to me that AGI is inevitable. But developing superintelligence safely is a complex process. It would take time and require difficult discussions — discussions that everyone in society should have a say in, not just the small number of researchers working on it. If we pursue that path, there's a real risk that somebody else will make AGI first and destroy all human life before we have a chance to ourselves. That would be unacceptable. To stop bad actors developing AGI that could kill us all, we need good actors to develop AGI that could also kill us all. I've come to realise that our best hope is to race at breakneck speed towards this terrifying, thrilling goal, removing any safeguards that risk slowing our progress. Once we've unleashed the technology's full destructive power, we can then adopt a "stable door" approach to its regulation and control — after all, that approach has worked beautifully for previous technologies, from fossil fuels to microplastics. Tags: agi ai-safety satire funny comedy tech future

2025-09-23

  • 15:03 UTC AI-Generated “Workslop” Is Destroying Productivity "Employees are using AI tools to create low-effort, passable looking work that ends up creating more work for their coworkers: We define workslop as AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task. [...] Each incidence of workslop carries real costs for companies. Employees reported spending an average of one hour and 56 minutes dealing with each instance of workslop. Based on participants’ estimates of time spent, as well as on their self-reported salary, we find that these workslop incidents carry an invisible tax of $186 per month. For an organization of 10,000 workers, given the estimated prevalence of workslop (41%), this yields over $9 million per year in lost productivity. Respondents also reported social and emotional costs of workslop, including the problem of navigating how to diplomatically respond to receiving it, particularly in hierarchical relationships. When we asked participants in our study how it feels to receive workslop, 53% report being annoyed, 38% confused, and 22% offended. The most alarming cost may be interpersonal. Low effort, unhelpful AI generated work is having a significant impact on collaboration at work. Approximately half of the people we surveyed viewed colleagues who sent workslop as less creative, capable, and reliable than they did before receiving the output. Forty-two percent saw them as less trustworthy, and 37% saw that colleague as less intelligent." Tags: productivity career ai work workslop code-review slop
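
The headline figure in that quote follows from simple arithmetic; a quick check, reading the article's numbers as $186 per affected employee per month at 41% prevalence across 10,000 workers:

```python
# Quick arithmetic check of the figures quoted above, taken at face value.
workers = 10_000
prevalence = 0.41              # share of workers affected by workslop
monthly_cost_per_worker = 186  # USD, reported cost per affected worker per month

annual_cost = workers * prevalence * monthly_cost_per_worker * 12
print(f"${annual_cost:,.0f} per year")  # -> $9,151,200, i.e. "over $9 million"
```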

Paul Graham