
The OCaml Planet

Articles and videos contributed by experts, companies, and passionate developers from the OCaml community. From in-depth technical articles and project highlights to community news and insights into Open Source projects, the OCaml Planet RSS feed aggregator has something for everyone.

Want your Blog Posts or Videos to Show Here?

To contribute a blog post or add your RSS feed, check out the Contributing Guide on GitHub.

Serving This Article from RAM for Fun and No Real Benefit

This article is an experience report on writing an HTTP server that serves my website directly from memory, with no file system involved. Just keep in mind: I am pretty sure you should not try to reproduce this for your own little corner of the Internet, but I had a lot of fun.

25 Dec 2024

Thomas Letan’s Blog

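The core trick is simple enough to sketch. Below is a minimal, hypothetical version of the idea using the Dream web framework: every page lives in an in-memory table and the handler never touches the disk. The article's actual server may be built quite differently.

```ocaml
(* A minimal sketch of serving pages straight from RAM, not the
   author's actual code. Pages are kept in an in-memory table and
   looked up per request; no file system access happens at all. *)

let pages : (string, string) Hashtbl.t = Hashtbl.create 16

let () =
  (* In the spirit of the article, content would be baked in at build
     time; here one page is registered by hand for illustration. *)
  Hashtbl.replace pages "/" "<h1>Served from RAM</h1>";
  Dream.run
  @@ Dream.logger
  @@ fun request ->
  match Hashtbl.find_opt pages (Dream.target request) with
  | Some body -> Dream.html body
  | None -> Dream.empty `Not_Found
```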
Multicore Property-Based Tests for OCaml 5: Challenges and Lessons Learned

We summarise the challenges and lessons learned from developing a suite of property-based tests that helps ensure the correctness of OCaml 5.

23 Dec 2024

Tarides

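For readers new to the approach, here is a small, illustrative property-based test in the multicore spirit, written with the QCheck library and OCaml 5 domains. It is not taken from the suite the article describes.

```ocaml
(* An illustrative property-based test, not from the article's suite:
   check that Atomic.incr loses no updates when two domains race on
   the same counter. QCheck generates the workload sizes. *)

let parallel_incr_test =
  QCheck.Test.make ~count:100 ~name:"parallel Atomic.incr"
    QCheck.small_nat
    (fun n ->
      let counter = Atomic.make 0 in
      let work () = for _ = 1 to n do Atomic.incr counter done in
      let d1 = Domain.spawn work and d2 = Domain.spawn work in
      Domain.join d1;
      Domain.join d2;
      (* The property: no increments were lost. *)
      Atomic.get counter = 2 * n)

let () = exit (QCheck_runner.run_tests [ parallel_incr_test ])
```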
Pragmatic Category Theory | Part 3: Associativity

Dmitrii Kovanikov's Personal Web Space

20 Dec 2024

Dmitrii Kovanikov

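As a taste of the topic: associativity just says that grouping doesn't matter for a binary operation. A minimal OCaml sketch (the module names below are illustrative, not from the article):

```ocaml
(* Associativity: combine (combine a b) c = combine a (combine b c).
   Module names here are illustrative, not taken from the series. *)

module type Semigroup = sig
  type t
  val combine : t -> t -> t
end

(* Integer addition forms a semigroup: ( + ) is associative. *)
module Int_add : Semigroup with type t = int = struct
  type t = int
  let combine = ( + )
end

let () =
  let a, b, c = (1, 2, 3) in
  assert
    (Int_add.combine (Int_add.combine a b) c
     = Int_add.combine a (Int_add.combine b c))
```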
Learn OCaml the Easy Way - Including the Hard Bits

Discover some of the best resources for learning OCaml, including tutorials, books, and events. Part of a larger group? Learn more about our OCaml courses too!

18 Dec 2024

Tarides

Saturn 1.0: Data structures for OCaml Multicore

Announcing the 1.0 release of Saturn, a library of efficient, tested, concurrent data structures ready to use with OCaml 5.

11 Dec 2024

Tarides

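To give a flavour of what using such a library looks like, here is a small sketch in which two domains communicate through a lock-free queue. It assumes Saturn exposes a `Saturn.Queue` module with `create`, `push`, and `pop_opt`; consult the 1.0 release documentation for the exact API.

```ocaml
(* A producer/consumer sketch over a lock-free queue. Assumes a
   Saturn.Queue module with create/push/pop_opt; check the Saturn 1.0
   docs for the exact interface. *)

let () =
  let q : int Saturn.Queue.t = Saturn.Queue.create () in
  let producer =
    Domain.spawn (fun () ->
        for i = 1 to 5 do Saturn.Queue.push q i done)
  in
  (* Consume five values on the main domain, spinning politely while
     the queue is momentarily empty. *)
  let rec drain n =
    if n < 5 then
      match Saturn.Queue.pop_opt q with
      | Some v ->
        Printf.printf "got %d\n" v;
        drain (n + 1)
      | None ->
        Domain.cpu_relax ();
        drain n
  in
  drain 0;
  Domain.join producer
```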
Building Machine Learning Systems for a Trillion Trillion Floating Point Operations

Over the last ten years we've seen Machine Learning consume everything, from the tech industry to the Nobel Prize, and yes, even the ML acronym. This rise has come along with an unprecedented build-out of infrastructure, with Llama 3 now reaching 4e25 floating point operations: 40 yottaFLOPs, or 40 trillion trillion floating point operations. To build these ML models, you need ML systems, like PyTorch. In this talk, Horace will (attempt to) answer:

- How have ML systems evolved over time to meet the training needs of ML models? How does building ML systems differ from building regular systems?
- How do we get the most out of a single GPU? What's the point of compilers if we're just training a single model?
- What is the right way to think about scaling to tens of thousands of GPUs?

09 Dec 2024

Jane Street - Tech Talks
