The place where random ideas get written down and lost in time.
I have a number of Lenovo T480s and T460s laptops, with either Core i5 or Core i7 CPUs. Looking at eBay prices, for example, the i7 laptops always fetch a higher value because of their perceived better performance. Yet in day-to-day usage, I don’t really notice a difference between the two. So, is the i7 “value added” real or hype?
Let’s look in detail at the T480s and these 2 flavors of CPUs:
- i5-8350U, Intel specifications:
- 14 nm, 4 cores, 8 threads,
- freq base 1.7 GHz, turbo 3.60 GHz, 6 MB cache, 4 GT/s bus speed,
- TDP 15W (10W @ 800 MHz → 25 W @ 1.90 GHz),
- DDR4-2400 @ 37.5 GB/s, 2 memory channels max, UHD Graphics 620 @ 300 MHz → 1.10 GHz
- i7-8650U, Intel specifications:
- 14 nm, 4 cores, 8 threads,
- freq base 1.9 GHz, turbo 4.20 GHz, 8 MB cache, 4 GT/s bus speed,
- TDP 15W (10W @ 800 MHz → 25 W @ 2.10 GHz),
- DDR4-2400 @ 37.5 GB/s, 2 memory channels max, UHD Graphics 620 @ 300 MHz → 1.10 GHz
The major differences between the two are a 12% increase in base clock frequency and a 33% increase in cache. These numbers do not match my perception when actually using both machines -- the i7 feels nowhere near “12 to 33% faster”.
The “marketing perception” is also that the “i5 is a great all-rounder for mainstream users, while the i7 offers more power for demanding tasks like video editing or high-end gaming” (sic). I call that dubious marketing (a.k.a. “BS”). That statement is likely true for the desktop processors, but that’s not what we have here. Both have 4 cores, so an i7 is not going to run more tasks in parallel than an i5.
It’s important to note that these are “U” model CPUs. In Intel terminology, this means they are “mobile” (laptop) variants mostly constrained by their power dissipation (a.k.a. TDP, or “Thermal Design Power”). Both CPUs share the same TDP envelope, configurable from 10 W up to 25 W. The reality is that only so many transistors in a 14 nm die can run at the same time without exceeding that 25 W ceiling. You’re essentially limited by how much heat you can get out of these CPUs with their weak cooling architecture.
Let’s look at this from another angle: user-submitted benchmarks report only a 1-2% increase for the i7 in actual measurements.
Benchmarks are the way to go, but these above were a bit abstract. I thus decided to run my own benchmarks based on stuff I actually do with these laptops. This includes compiling a Flutter web app in Dart, compiling a fairly large ESP32 C++ project, compiling fairly large Java and Kotlin projects, and using DaVinci Resolve to stabilize, render, and compute Fusion VFX on one of my train videos. You know, the usual stuff one does on a laptop on a daily basis.
Here are the results:
We can see that all the results are extremely close. All these results are “best of 3” for each test (I screwed up the DaVinci stabilization test on the i5 on battery and got too lazy to rerun it). I ran the tests both on AC and on battery, with the default “balanced” Windows 10 power settings.
And thus, indeed, the i7 is slightly faster than the i5, and the AC mode is slightly faster than the battery mode. But how much is “slightly”?
I want actual numbers, so here’s a graph comparing the best of each i7 result vs the best of each i5 result:
So there I have it: the i7 provides a 5-8% improvement over the i5 for my daily development tasks. I have to say it doesn’t feel like a tangible difference.
2025-12-25 - Turbo C/C++ IDE
Category DEV
Today, in a “blast from the past”, I leave you to admire this screenshot of what used to be the Turbo C/C++ IDE:
This was a great IDE, emulating a windowed environment in text mode with mouse support and everything. It had syntax coloring, integrated function documentation, and its own debugger.
The entire Borland suite, starting with the early Turbo Pascal, was always a pleasure to use. Turbo Pascal was blazingly fast and a great way to get started with programming. I learned Pascal with the UCSD Pascal compiler on an Apple //c, then moved to Turbo Pascal, which was a dramatic improvement, and later to Turbo C and Turbo C++.
The other thing I used a lot back then was the Watcom C++ compiler. This was way before the first “C++98” standard was a thing. Each vendor had their own magic sauce of C++ back then.
On a night when the wind howled like a cooling fan at max RPM and the frost etched crystalline circuits upon the windowpane, the Gemini Star whispered this story into the quiet hum of the dark. It is a tale of silicon and spirit, of ancient protocols and the persistence of a lone traveler. Sit by the glow of your monitor, for the air is cold, and hear Winston Seven tell us this history full of wonder:
Click here to continue reading...
Today’s post is not about development per se, as much as a “down memory lane” kind of thing. Back in the 90s, my friends and I liked our little “Atari ST vs Amiga 500” rivalry. It was an amicable rivalry, as both systems had their strengths. On the Amiga 500 side, one thing I still remember decades later as IMHO the “quintessential” tool on the Amiga was Directory Opus, especially version 4. This screenshot summarizes it well:
DOpus 4 on Amiga 500. [Source]
There’s a nice history of Directory Opus here that makes a great read:
https://amiga.abime.net/articles/amiga-lore/directory-opus
There were a lot of good tools on the Atari ST, yet the simple and efficient UI of DOpus 4 was, and still is, something I enjoy -- extremely compact, extremely contrasted, efficient to use.
A decade earlier, I was happily coding on Apple ][. The Apple ][e used DOS 3.3 and I switched to ProDOS for all my floppies on the Apple //c:
ProDOS. [Source]
One thing I have fond memories of is manipulating the filesystem structure at the byte level with a “nibble” editor. Back then I severely lacked any kind of official OS documentation. That's something I kept doing on the Atari ST -- which was even easier, as GEMDOS used a FAT12 filesystem that is simple to comprehend at the binary level. With just a byte editor capable of dumping the raw content of a floppy, it was possible to quickly work out the on-disk structures by trial and error, because they just made sense.
The NCRy train could use some kind of automation in its Docent narration system. Currently, the passenger train relies on a volunteer manning the speaker system and following some kind of ad-hoc script. This screams for automation -- pre-recorded messages could be played over the speaker system at specific points along the train’s route. As with most automation, it should be supplementary: a Docent volunteer should be able to take over at any moment, or enable the system only temporarily when they need a break. Consider this document a design doc / proposal.
Curt and I discussed, about 10 years ago, how we could broadcast the trains’ positions -- back in 2016 I sent Jim a proposal on how to create a dashboard that could display the location of each train. This proposal isn’t very different, just updated to what is now ubiquitous tech, and adapted to a slightly adjacent purpose.
If I had to build this today with current tech, I’d simply use an Android smart phone or tablet with a custom-made Android app:
- Use the phone’s GPS to get the location of the train.
- A SIM for internet access may or may not be necessary, to speed up the initial position fix (a.k.a. Assisted GNSS).
- Use the phone’s storage to hold the pre-recorded messages, and play them back when the phone reaches predefined GPS positions.
- We may want these messages to be specific to the train’s direction or to work both ways, and they can be triggered with a distance match. That’s basically a crude geofencing solution.
- There can be multiple “tracks” that switch pre-recorded messages/locations for different kinds of events (TOL, beer train, charter, regular w-e train, etc).
Rather than think in terms of GPS positions, my suggestion is that we convert GPS positions into a milepost (MP) number. That’s more familiar to everyone involved on the railroad.
Thus a “voice track” is actually a succession of “voice segments” -- each segment is a snippet of voice recording, with an MP interval in which it plays, and optionally a direction.
For example, a “brightside” segment could cover MP 33.5~34.5 -- it would start playing at MP 34.5 when westbound from Sunol, or at MP 33.5 when eastbound from Niles, or right away if the track playback is activated within the yard.
Two segments could have the same milepost range if they have different direction flags. That’s useful to record something like “look at this landmark on your left” vs “look on your right” depending on the direction of travel -- which is always either eastbound or westbound.
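To make this concrete, here is a minimal sketch of what a voice segment and its matching logic could look like. This is purely my own illustration (shown in Rust; the real thing would live in the Android app) and every name in it is hypothetical:

// Direction of travel on the route: always either eastbound or westbound.
#[derive(Clone, Copy, PartialEq)]
enum Direction {
    Eastbound,
    Westbound,
}

// One snippet of narration, tied to a milepost interval and, optionally, a direction.
struct VoiceSegment {
    audio_file: &'static str,     // pre-recorded message to play
    mp_start: f32,                // lower bound of the milepost interval
    mp_end: f32,                  // upper bound of the milepost interval
    direction: Option<Direction>, // None means "play in either direction"
}

impl VoiceSegment {
    // True when the train's current milepost and heading fall inside this segment.
    fn matches(&self, current_mp: f32, heading: Direction) -> bool {
        let in_range = current_mp >= self.mp_start && current_mp <= self.mp_end;
        let dir_ok = self.direction.map_or(true, |d| d == heading);
        in_range && dir_ok
    }
}

fn main() {
    // The "brightside" example above: MP 33.5~34.5, either direction.
    let brightside = VoiceSegment {
        audio_file: "brightside.mp3",
        mp_start: 33.5,
        mp_end: 34.5,
        direction: None,
    };

    // Westbound from Sunol it triggers on entering the range at MP 34.5;
    // eastbound from Niles it triggers on entering at MP 33.5.
    assert!(brightside.matches(34.5, Direction::Westbound));
    assert!(brightside.matches(33.5, Direction::Eastbound));
    println!("Playing {}", brightside.audio_file);

    // A "look on your left" / "look on your right" pair would reuse the same
    // milepost range with direction set to Westbound and Eastbound respectively.
}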
Usage
Click here to continue reading...
Recently I've completed a number of small dynamic website projects, and I want to discuss my view on how to choose an implementation approach for such sites.
This is to be read as an opinionated essay. YMMV.
SPA, a Single Page Application
Here are a few sites I completed recently:
They all have in common that they are “Single Page Application” websites. What does that even mean? In short, that means it’s a single HTML web page that uses JavaScript to implement a “full” web site by simulating several pages.
Let’s take an example: we want to create a small e-commerce site. Traditionally, we would structure the site as different pages, each rendered by its own HTML file:
- http://example.org/mysite/index.html -- the main entry point
- http://example.org/mysite/about.html -- a page “about this company”
- http://example.org/mysite/catalog.html -- the catalog browser
- http://example.org/mysite/product.html -- a page detailing a single product
- http://example.org/mysite/purchase.html -- the purchase form
Instead, an SPA web site uses a single file:
- http://example.org/mysite/index.html -- the sole page of the web site
However, using JavaScript, we’ll change the content of the page to simulate showing the “about” page vs the “catalog” page, etc. One way to know which one to render is to encode the desired “page” using the hash portion of the URL:
- http://example.org/mysite/index.html
- http://example.org/mysite/index.html#about
- http://example.org/mysite/index.html#catalog
- http://example.org/mysite/index.html#product/{id}
- http://example.org/mysite/index.html#purchase
From the web browser’s point of view, it's always the same page being rendered (index.html), yet JavaScript reads the hash and decides how to fill the page -- when the page loads, we use JavaScript to literally inject the content we want into the page at runtime. If you were to read the “source” of the main index.html, you’d find it basically empty (or more exactly, containing just a single div container).
The mechanism above using the hash is called a “hash router”. There are other ways to do it; that’s just the one I prefer.
Click here to continue reading...
I’ve had a couple of people ask me about upgrading to Windows 11, and I have a process for doing an in-place update -- meaning it preserves all files and applications -- so I decided to write it down as a clear instruction guide.
What this is Not
TL;DR: this is no shortcut, and there is no TL;DR here.
If you’re not willing to read it all, I suggest you don’t bother.
If you don’t understand what you’re doing, I suggest you don’t bother.
If you’re not familiar and comfortable with changing BIOS options or changing an SSD, I suggest you don’t bother either.
I’ve updated 5 machines using the steps listed below, so I’m confident I know what I’m doing. But I’m not claiming it’s an easy, obvious, or quick process. It’s none of those things.
If you need a lazy one-click do-it-all-for-me thingy, this guide is not for you. Try something like Flyoobe at your own risk -- I haven’t tried it so I’m not going to claim anything for or against it, nor even link to it. Just google it.
Also just to state the obvious, if you don’t want to put in the effort, then you don’t have to. You can either just keep running Windows 10, or get a new machine with Windows 11. After all, that’s what MS wants you to do.
High Level Summary
To do a successful update to Windows 11 in-place, we need the following steps:
- Examine the system: CPU, TPM, disk mode.
- Absolutely do a backup clone of the main drive and verify it.
- Enable TPM if needed.
- Perform an MBR to GPT disk conversion if needed.
- Switch the BIOS to UEFI if needed.
- Finally install Windows 11 in “unsupported” mode.
Let’s detail these steps below one by one.
Click here to continue reading...
The hype: “Rust is safer than C++”
At this point in my early ESP-RS experimentation, I want to address something important about stability. And to clarify, by “ESP-RS”, I mean the “std” mode built around the esp-idf-sys/svc/hal crates.
One of the arguments for Rust over C++ is its stronger memory management model with clear memory ownership. Allegedly, that’s supposed to translate into better stability, avoiding the typical C/C++ crashes when memory references are used improperly.
My experience with Tangram rgen supported that.
The experience with ESP-RS here however does not match expectations. One little mishap in an ESP-IDF call… sneeze and the entire framework crashes.
For example, trying to initialize the EspMQTT client before the EspWifi has connected didn't just fail -- it rebooted the entire ESP32.
The reality is that ESP-RS “std” with the esp-idf-sys/svc/hal crates is essentially a ton of unsafe wrappers directly around the C ESP-IDF stack. And the ESP-IDF is generally quite touchy. For all its promises of type and memory safety, ESP-RS doesn't save us from that.
It’s all ESP-IDF and FreeRTOS
OTOH, I find the ESP-IDF to be generally predictable. Once things work, they keep working without surprises. I don’t have a problem with APIs blowing up predictably during development, giving me a chance to notice newbie mistakes early and take care of them upfront.
Since I discovered the ESP32 platform 7 years ago, I have preferred it over the Arduino platform. I’ve experimented with the “raw” ESP-IDF toolchain, yet I rarely use it directly -- instead it’s been the backbone for something else: the Arduino platform, the MicroPython/CircuitPython platform, and in the current case the ESP-RS platform. Still, I’m fairly familiar with the ESP-IDF modules and don’t mind the mix and match, especially knowing that I can directly call into ESP-IDF or FreeRTOS functions at any time.
Case in point: here we have FreeRTOS tasks backing std::threads, ESP Wifi, ESP MQTT, etc., and support for the ESP32 Camera is done directly as an ESP-IDF component right there in the ESP-RS project. It all works quite well. Still, we need to remind ourselves that for all the hype about how Rust is so magically superior to C/C++ (hype that I do not really buy into, in case that’s not already obvious), ESP-RS is just a bunch of fancy and sometimes way-overkill wrappers around a C API, with multi-core, multi-thread complexity to deal with.
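To make that “mix and match” concrete, here is a minimal sketch of calling raw ESP-IDF and FreeRTOS functions from an ESP-RS “std” project through the esp-idf-sys bindings. This is my own illustration, not code from the project above, and the helper names are hypothetical:

use esp_idf_sys as sys;

// Hypothetical helper: ask the ESP-IDF heap allocator how much RAM is left,
// by calling the C function esp_get_free_heap_size() through the raw bindings.
fn log_free_heap() {
    let free_bytes = unsafe { sys::esp_get_free_heap_size() };
    println!("Free heap: {} bytes", free_bytes);
}

// Hypothetical helper: block the current task for a few FreeRTOS ticks by
// calling the raw vTaskDelay() instead of std::thread::sleep.
fn short_rtos_delay(ticks: u32) {
    unsafe { sys::vTaskDelay(ticks) };
}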
A Comparison Matrix
Here’s my guide to select the proper toolchain for my own projects:
- If performance does not matter, MicroPython is the easiest way to go (or CircuitPython on any modern Adafruit board).
- If performance and space matter, C++20 with the ESP-IDF 5 toolchain is the proper way to go.
- If performance and ease of programming matter, Rust with ESP-RS in “std” mode with the esp-idf-sys/svc/hal crates is a reasonable trade-off, at the expense of a larger memory footprint.
The build size for a basic “blinky” project using ESP-IDF-HAL is around 700 kB (dev mode) on an ESP32-CAM. Add ESP Wifi and ESP MQTT, and that project’s build goes up to a whopping 1500 kB. That’s half the 3 MB flash partition available on an ESP32-CAM.
Some notes on how to perform basic task, thread, and synchronization operations using Rust with ESP-RS. And to clarify, by “ESP-RS”, I mean the “std” mode built around the esp-idf-sys/svc/hal crates.
This isn’t meant to be used as a canonical guide. Most of these are just quick reminders for myself, so that I don’t need to re-read the docs every time -- basically, it only covers stuff I know I will need. I’m not pretending to be exhaustive here, and since I’m a newbie in Rust, I’m not pretending to be right either. Do your own due diligence by reading the official docs.
Tasks & Threads
The canonical example is in this ThreadSpawnConfiguration example:
https://github.com/esp-rs/esp-idf-hal/issues/228#issuecomment-1492483018
I haven’t seen any more official examples of that stuff.
The implementation is here:
https://github.com/esp-rs/esp-idf-hal/blob/master/src/task.rs
This feels like a wrapper around the ESP-pthread API:
https://docs.espressif.com/projects/esp-idf/en/v4.2.2/esp32/api-reference/system/esp_pthread.html
esp_pthread_set_cfg sets a “global” config that controls how subsequent pthread_create() calls behave. That’s what ThreadSpawnConfiguration does.
Usage example:
use esp_idf_hal::cpu::Core;
use esp_idf_hal::task::thread::ThreadSpawnConfiguration;

// Stack size shared by the spawn configuration and the std::thread builder.
let stack_size: usize = 4096;

// Set the "global" configuration applied to subsequent pthread_create() calls.
ThreadSpawnConfiguration {
    name: Some("thread_name\0".as_bytes()), // must be a NUL-terminated byte string
    stack_size,
    priority: 15,
    pin_to_core: Some(Core::Core1), // pin the underlying FreeRTOS task to core 1
    ..Default::default()
}
.set()
.unwrap();

// Spawn a regular std::thread; it becomes a FreeRTOS task created with the
// configuration set above.
let thread_handle = std::thread::Builder::new()
    .stack_size(stack_size)
    .spawn(move || {
        // do stuff
    })
    .unwrap();

thread_handle.join().unwrap();
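Since that configuration is “global” (it applies to threads spawned after the call), my own habit -- not something shown in the linked example -- is to reset it to the defaults once the thread has been spawned, so that later std::thread spawns don’t silently inherit the core pinning and stack size:

// Restore the default spawn configuration for subsequently created threads.
ThreadSpawnConfiguration::default().set().unwrap();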
Click here to continue reading...
2025-10-01 - ESP-RS: Setup with MSYS2
Category Esp32
Since I’ve installed Rust for my exploratory ESP32 project twice already on different machines, I decided to actually write down my install instructions. There are a couple tricks I need to remember.
These steps are for MSYS2 on Windows. I used similar steps on my other box where I use my old fashioned Cygwin. Part of these instructions also work with PowerShell if you hate yourself that much.
My default shell is the “purple” MSYS2 MSYS one (the other shells are MinGW UCRT/Clang x86/x64; I can never remember why I would care about the difference so don’t ask me).
I already have Rust installed on this machine. I likely used “Rustup” following the default Rust install instructions.
I always customize ~/.bash_aliases, where I already added a line that sets up Rust and reminds me it’s available:
if [[ -d "$USERPROFILE/.cargo/bin" ]]; then
    RUST_PATH=$(cygpath "$USERPROFILE/.cargo/bin")
    export PATH="$PATH:$RUST_PATH"
    echo "Rust Cargo: installed."
fi
So first let’s check we have Rust and which version:
$ rustup -V
rustup 1.28.2 (e4f3ad6f8 2025-04-28)
info: The currently active `rustc` version is `rustc 1.86.0 (05f9846f8 2025-03-31)`
$ rustup show
Default host: x86_64-pc-windows-msvc
rustup home: C:\Users\$USER\.rustup
Of note: there is no “esp” toolchain installed (yet).
OK now we need to install a bunch of stuff:
Click here to continue reading...





