
Relatively Universal ROM Programmer Makes Retro Tech Hacking Accessible

There are treasures hidden in old technology, and you deserve to be able to revive them. Whether it’s old personal computer platforms, vending machines, robot arms, or educational kits based on retro platforms, you will need to work with parallel EEPROM chips at some point. [Anders Nielsen] was about to do just that when he found out that a TL866, a commonly used programmer for such ROMs, would cost a full $70 – significantly raising the budget of any hack involving parallel ROMs. After months of work, he is happy to bring us the Relatively Universal ROM Programmer, an open-source parallel ROM programmer board that you can easily assemble or buy.
Designed in the Arduino shield format, there’s a lot of care and love put into making this board as universal as reasonably possible, so that it fits just about any old ROM chip you might want to program – whether it’s an old UV-erasable part that needs up to 30 V to be written, or a newer 5 V-friendly chip. You can use ICs with pin counts from 24 to 32 pins, it’s straightforward to fit a ZIF socket to this board, there’s LED indication and silkscreen markings so that you can see and tweak the programming process, and it’s masterfully optimized for automated assembly.
You can breadboard this programmer platform as we’ve previously covered, you can assemble your own boards using the open-source files, and if you don’t want to do either, you can buy assembled boards from [Anders Nielsen] too! The software is currently a work in progress, since that’s part of the secret sauce that makes the $70 programmers tick. You do need to adjust the programming voltage manually, but that can be improved later with a small hardware fix. In total, if you just want to program a few ROM chips, this board saves you a fair bit of money.

[…]


MXM: Powerful, Misused, Hackable

Today, we’ll look into yet another standard in the embedded space: MXM. It stands for “Mobile PCI Express Module”, and is basically intended as a GPU interface for laptops with PCIe, but there’s way more to it – it can work for any high-power high-throughput PCIe device, with a fair few DisplayPort links if you need them!
You will see MXM sockets in older generations of laptops, barebones desktop PCs, servers, and even automotive computers – certain generations of Tesla cars used to ship with MXM-socketed Nvidia GPUs! Given that GPUs are in vogue today, it pays to know how you can get one in a low-profile form factor and avoid putting a giant desktop GPU inside your device.
I only had a passing knowledge of the MXM standard until a short while ago, but my friend [WifiCable] has been playing with it for a fair bit now. On a long Discord call, she guided me through all the cool things we should know about the MXM standard, its history, compatibility woes, and hackability potential. I’ve summed it all up into this article – let’s take a look!
This article has been written based on info that [WifiCable] has given me, and it’s certainly not the last one where I interview a hacker and condense their knowledge into a writeup. If you are interested, let’s chat!

Simple Wireup, Generous Payoff
Yes, an Intel A380m card in MXM format
An MXM card has a whole side dedicated to its gold finger PCB edge connector. With 285 pins, there are a whole lot of interfaces you can get out of these, and all of them are within hobbyist reach! To make an MXM card work, you don’t need much, either.
For an MXM card to work, first, you need to be able to provide between 60 W and 100 W of power, with the ability to impose a power consumption limit on the card. The standard says that the voltage can be anywhere from 7 V to 20 V. This is obviously intended for laptop use, where the main power rail can either be at charger voltage or battery voltage, and it results in high efficiency – you don’t need a separate buck-boost regulator for, say, 12 V.
Then, you need a PCIe link of up to 16x, but because PCIe is cool like that, even a 1x link will work as long as you won’t be sad if the GPU is bottlenecked by it. You also might need to set up a few control GPIOs, like the card enable pin, and the power limit pin that tells the card whether it should run in lower-power mode or not. Plus, for some cards, you might need to give the card 5 V at an amp or two – the standard requires that, but it’s not clear why. Technically, you can even connect an MXM card to a Raspberry Pi 5 or CM4, as long as you can procure enough power from some external source – if you want a low-footprint GPU paired with a Pi, MXM makes that firmly within your reach.
In return, you get a wide array of interfaces. The coolest part is, undoubtedly, DisplayPort. You can get up to six 4-lane DP links out of an MXM card, as long as the GPU chip is okay with it. You might also be able to get VGA, LVDS, and even HDMI/DVI. MXM GPUs do support DP++, a DisplayPort mode that outputs HDMI-compatible signals, and you only need a few external components.
You also get a good few low-level interfaces, both for practical and debug purposes. Need to control a small fan? There’s a PWM output you might be able to use for fan control, and a tach signal input! Backlight control for an LCD panel you’ve wired up? There’s PWM for that too. Want to poke at the GPU’s JTAG? The MXM socket has pins defined for that. It’s up to the individual cards to support or ignore a lot of what the MXM standard defines, so you might still benefit from a small MCU, but having those pins seriously helps in embedded applications.
Speaking of JTAG and vendor freedom, of course, there are OEM pins – since anyone can produce MXM GPUs and systems, and the MXM standard has lasted for decades now, manufacturers like to put their own spin on them. You can often figure things out from MXM-equipped laptop schematics, and sometimes it’s necessary to check a few. See, giving freedom to individual implementers is a double-edged sword, and MXM is an outstanding illustration of how modular standards can go wrong for regular users.
Compatible, Mostly
Looking at MXM, you might rejoice – thinking about upgrading and repairing your laptop well beyond the few years that the warranty period covers. However, manufacturers are not exactly interested in that. For them, the incentive structure for using MXM is usually completely different.
For a start, producing a board with five BGAs can in certain cases be easier than producing a board with fifteen, which is what you often have to do if you put a GPU and its RAM on the mainboard instead of on an MXM module. And, for offering multiple GPU configurations of the same model in a way that lets the manufacturer cover multiple points on the supply-demand chart, it might just be easier to produce an array of MXM cards and then pair them with an array of GPU-less mainboards that come in their own configurations. Not always, though – which is part of why you don’t see it as much lately.
This is not a standard-defined shape for an MXM card.
So, while you might like upgradability and repairability, you might find that MXM GPUs are not often offered as replacement parts for sale. And, what’s worse, if you’ve found an MXM card available for a different laptop, there’s no guarantee it will fit.
For instance, some cards are of the MXM 3.0 standard, while others are MXM 3.1, with slight but important differences like support for two DP ports on LVDS pins. However, most of the real-world differences are from either lack of standardization or from manufacturers straight up ignoring the standard.
The first hurdle is the most obvious, and that is the mechanical footprint. The MXM standard defines two possible card shapes, the A variant and the B variant, including things like heatsink and retention screw hole layout, and even component height for heatsink compatibility purposes. Many laptop manufacturers ignore these rules, producing cards of wacky shapes, or worse, shapes that almost match but are incompatible in some subtle yet severe way.
Then, there’s the VBIOS and driver problems. Many MXM cards have an onboard BIOS chip, whereas other cards rely on the laptop to feed them their BIOS during boot. If your card is of the latter type, you might need to add a UEFI module or hack the code. Alternatively, some cards ship with unpopulated flash chip footprints or unflashed chips on them, so you can give a BIOS to your card with a bit of soldering and flashing, as long as you can find an image that works.
As for drivers, Nvidia stands out there. Many Windows Nvidia drivers for MXM cards run hardware checks that tie the card to the hardware IDs of specific laptops, and refuse to install if the card sits in a laptop it was not expected to be installed in. You used to be able to work around this, but nowadays the driver signing mechanism severely limits what you can do – a mechanism that, in Windows, has no sane leeway for user-tweaked drivers and, as such, acts as an effective form of proprietary vendor lock-in. So, if you want to upgrade your Nvidia MXM card and you run Windows, you might run into a bit of a brick wall.
Some Outright Hostile
Continuing this line of reasoning, there are slots that look like MXM but aren’t MXM, and I’m not talking about SMARC, which is a fun SoM standard reusing MXM slots, just like Pi Compute Modules reuse DDR sockets. No, I’m talking about manufacturers like Lenovo, who have added MXM socketed GPUs into some of their more recent laptops, but with completely different pinouts. They don’t advertise their slots as MXM, at least, which is a bonus.
Where are the power pins? Who knows!
Still, these cards are easy to confuse for actual MXM, and they fit into the slot all the same. The most fiery factor is the power pin layout – a mind-boggling change made on some laptop models that can destroy your card and laptop even if the card fits mechanically. On one side of the MXM card, there’s an array of power pins – a matching number of VIN and GND pins, often visible as a single large gold finger. For some unimaginable reason, a few manufacturers have made cards that remap the entire pinout and specifically put those power pins on the opposite side.
The pinout swapping is bad enough, but it’s the power pin swapping that really gets us – and gets every piece of tech involved to release the magic smoke, too. And then, there are the few outright criminal cases where manufacturers have put power pins on both sides of the pinout. You can easily notice this when you look at your card, but you have to know to look out for it.
The MXM standard can’t prevent most of these problems, and whatever it tries to limit, laptop manufacturers can freely bypass. There’s no certification or compliance checks; fundamentally, in laptops, MXM isn’t used for your convenience – it’s used for the convenience of the manufacturer. If you look at your old MXM-equipped laptop and think that you might be able to upgrade its GPU, remember that there’s more than meets the eye.
All of these things, of course, don’t mean that you can’t hack on MXM otherwise. Just remember that whatever you build might be more specific to a certain breed of MXM slots in certain laptop lineups than to MXM as a standard.
Still Hackable Anyway
How about a few good MXM hacks to show you what you can do? Remember, fundamentally, MXM is a high-power connection with a high-bandwidth PCIe link on it, which lets you pull some wonderful tricks!
For instance, here’s an MXM adapter for certain kinds of iMacs that lets you install an NVMe SSD into the MXM slot of your trusty iMac while preserving the MXM GPU connections! It involves changing a chipset strap to enable bifurcation, so there’s no power-hungry PCIe switch involved, and going from x16 to x8 on your MXM GPU won’t cause any notable bandwidth loss either. So, you can replace your SATA HDD or SSD with a speedy modern NVMe drive that’s probably way cheaper, too!
It wouldn’t be hard to make a generic MXM to NVMe adapter, in general – and [WifiCable] has a template KiCad project for you. Just like mPCIe and M.2 cards, an MXM card is just a PCB, after all, 1.2 mm thick. You might be worried about leaving your laptop GPU-less, but many laptops with MXM cards still have an iGPU that gets enabled whenever the MXM card is removed – though that’s not a guarantee. We might see an MXM to Oculink adapter too, at some point!
There are also a few adapters to reuse MXM cards on the market, cheap and expensive alike. That kind of adapter is good for checking any MXM cards you have laying around, and on the cheap ones, you might even be able to solder the extra HDMI port on, as long as you get 5 V from somewhere. Sadly, none of them are open-source – yet.

This is an MXM tinkering adapter board from [WifiCable], exposing as much of MXM as humanly possible, with a wide range of power input options. Every single option is on either pin headers or SMD resistors, able to satisfy whichever obscure feature an MXM card might need, and to tap interfaces that manufacturers don’t expect you to tap. It’s a decently complex design, still to be polished, and it’s a 6-layer board big enough to go over a good few price breaks at any PCB fab – we’ve both learned a ton about high-speed design as [WifiCable] went about it. However, when it comes to playing with different MXM cards, exploring manufacturer differences, and tinkering with card compatibility, this is as good a testbench board as anyone can build!
Want to build your own MXM stuff, whether cards or card-carrying PCBs? Here’s a socket on LCSC, and with easyeda2kicad, you can easily get a footprint and 3D model for it. As for designing your own card or getting the [generic] pinout, you can find the MXM standard by looking up MXM_Specification_v31_r10.pdf.
Gone But Not Forgotten
DGFF card
Sadly, with the trend of making laptops thinner, we’ve been losing MXM, and the companies involved in defining the standard have not been all that interested in updating it, or even adhering to it for that matter. Nevertheless, due to industrial use of MXM, you can still find many modern cards in MXM format!
Furthermore, the spirit of MXM lives on. The proprietary DGFF standard is superseding MXM in Dell laptops – it’s thinner, and it’s fundamentally the same functionality that MXM provides. The same goes for the Framework 16 expansion bay modules – you could easily make an MXM to expansion bay card, and, [WifiCable] has made a KiCad sketch of one too!
For now, we still have laptops with MXM and almost-MXM cards around, and if you ever look into tinkering with those, you now have a better roadmap towards that. Despite the prevalence of soldered-on GPUs in laptops, the concept of GPU modules isn’t about to die out, and companies still put “GPU module” on the whiteboards every now and then during their product design processes. […]


Human-Interfacing Devices: HID over I2C

In the previous two HID articles, we talked about stealing HID descriptors, learned about a number of cool tools you can use for HID hacking on Linux, and created a touchscreen device. This time, let’s talk about an underappreciated HID standard, but one that you might be using right now as you’re reading this article – I2C-HID, or HID over I2C.
HID as a protocol can be tunneled over many different channels. If you’ve used a Bluetooth keyboard, for instance, you’ve used tunneled HID. For about ten years now, I2C-HID has been heavily present in the laptop space – it was initially used in touchpads, later in touchscreens, and now also in sensor hubs. Yes, you can expose sensor data over HID, and if you have a convertible (foldable) laptop, that’s how the rotation-determining accelerometer exposes its data to your OS.
This capacitive touchscreen controller is not I2C-HID, even though it is I2C. By [Raymond Spekking], CC-BY-SA 4.0
Not every I2C-connected input device is I2C-HID. For instance, if you’ve seen older tablets with I2C-connected touchscreens, don’t get your hopes up, as they likely don’t use HID – it’s just a complex-ish I2C device, with enough proprietary registers and commands to drive you crazy even if your logic analysis skills are on point. I2C-HID is nowhere near that, and it’s also way better than the PS/2 we used before – an x86-only interface with limited capabilities, already almost extinct from even x86 boards, and further threatened in this increasingly RISCy world. I2C-HID is low-power, especially compared to USB, as capable as HID gets, compatible with existing HID software, and ubiquitous enough that you surely already have an I2C port available on your SBC.
In the modern world of input devices, I2C-HID is spreading, and the coolest thing is that it’s standardized. The standardization means a lot of great things for us hackers. For one, unlike all of those proprietary I2C touchscreen controllers, I2C-HID devices are easier to reuse; as much as information on them might be lacking at the moment, that’s exactly what we’re fixing right now. If you are using a recent laptop, the touchpad is most likely I2C-HID. Today, let’s take a look at converting one of those touchpads to USB HID.
A Hackable Platform

Two years ago, I developed a Framework laptop input cover controller board. Back then, I knew some things about I2C-HID, but not too much, and it kinda intimidated me. Still, I wired up the I2C pins to an I2C port on an RP2040, wired up the INT pin to a GPIO, successfully detected an I2C device on those I2C pins with a single line of MicroPython code, and then left the board sitting on my desk out of dread over converting touchpad data into mouse events – as it turns out, it was way simpler than I thought.
There’s a specification from Microsoft, and it might be your first jumping-off point. I tried reading the specification, but I didn’t understand HID at the time either, so that didn’t help much. Looking back, the specification is pretty hard to read regardless. Here’s the deal in the real world.
If you want to get the HID descriptor from an I2C-HID device, you only need to read a block of data from its registers. Receiving reports (HID event packets) is simple, too. When the INT pin goes low, read a block of data from the device – you will receive a HID report. If there’s an RST pin, you will want to bring it down upon bootup for a few hundred milliseconds to reset the device, and you can use it in case your I2C-HID device malfunctions, too.
Now, there are malfunctions, and there definitely will be quirks. Since HID is ubiquitous, there are myriad ways for manufacturers to abuse it. For instance, touchpads are so ubiquitous that Chrome OS has entire layers dealing with their quirks. But here we are: I have an I2C device connected to an RP2040, previous MicroPython I2C work in hand, some LA captures between the touchpad and the original system stashed away, and I’m ready to send it all the commands it needs.
Poking And Probing
To read the descriptor, you can read a block from register 0x20, where the first four bytes define the descriptor version and the descriptor length – counting these four bytes in. When we put this descriptor into the decoder, we will get something like this:
[…]
0x05, 0x0D, // Usage Page (Digitizer)
0x09, 0x05, // Usage (Touch Pad)
0xA1, 0x01, // Collection (Application)
0x85, 0x01, // Report ID (1)
0x05, 0x0D, // Usage Page (Digitizer)
0x09, 0x22, // Usage (Finger)
0xA1, 0x02, // Collection (Logical)
0x09, 0x47, // Usage (Confidence)
0x09, 0x42, // Usage (Tip Switch)
0x15, 0x00, // Logical Minimum (0)
0x25, 0x01, // Logical Maximum (1)
[…]
That is a HID descriptor for a touchpad alright! Save this descriptor somewhere – while getting it dynamically is tempting, hardcoding it into your firmware might also be a viable decision, depending on which kind of firmware you’ll be adding I2C-HID support into, and you’ll really want to have it handy as a reference. Put this descriptor into your favourite decoder website, and off we go! Oh, and if you can’t extract the descriptor from the touchpad for whatever reason, you can get it from inside a running OS like I’ve done in the last article – that’s what I ended up doing, because I couldn’t make MicroPython fetch the descriptor properly.
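In theory, the fetch itself is only a couple of I2C operations. Here’s a minimal MicroPython sketch of that register 0x20 dance – assuming a touchpad at address 0x2c like the one in the code further down, with the pins and the exact field layout being things you’ll want to double-check against your own board and against Microsoft’s spec:

import struct
from machine import I2C, Pin

i2c = I2C(0, sda=Pin(0), scl=Pin(1), freq=400_000)  # adjust pins/bus for your board
TP_ADDR = 0x2c  # the touchpad address used later in this article - check i2c.scan() for yours

# the HID descriptor register is 0x20; register addresses go out as two bytes, little-endian
i2c.writeto(TP_ADDR, bytes([0x20, 0x00]), False)  # False = no STOP, so the read follows a repeated start
hid_desc = i2c.readfrom(TP_ADDR, 30)  # the I2C-HID descriptor is 30 bytes long

# first four bytes: descriptor length and bcdVersion, then report descriptor length and register
desc_len, bcd_ver, rdesc_len, rdesc_reg = struct.unpack('<HHHH', hid_desc[0:8])
print('descriptor v%04x, report descriptor: %d bytes at register 0x%04x' % (bcd_ver, rdesc_len, rdesc_reg))

# the report descriptor is the part that goes into the decoder website
i2c.writeto(TP_ADDR, struct.pack('<H', rdesc_reg), False)
print(i2c.readfrom(TP_ADDR, rdesc_len))
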
For some reason, Microsoft decided to distribute this spec as a .docx file, something that I immediately abused as a way of stress relief
Take a look at the report IDs – they can be helpful later. All reports coming from the touchpad will have their report ID attached, and it’s good to know just which kinds of events you can actually expect. Also, here’s a challenge – try to spot the reports used for BIOS “simple mouse” functionality, firmware update, touchpad calibration, and any proprietary features!
Now, all that’s left is getting the reports. This is simple too – you don’t even need to read a block from a register, just a block of data from the touchpad. When INT is asserted (set low), that means the touchpad has data for you. First, you read a single byte, which tells you how many more bytes you need to read to get the actual packet, and then you read out that packet. If your INT doesn’t work for some reason, as it was on my board, you can continuously poll the touchpad in a loop instead, reading a single byte each time, and reading out a full packet whenever the first byte isn’t 0x00. Then, it’s the usual deal – after the length bytes comes the report ID, and everything after that is the actual report contents. For I2C code of the kind that our last article uses, reading a report works like this:

from time import sleep
# i2c setup and the usb_hid module come from the last article's custom MicroPython build;
# the touchpad sits at address 0x2c on that bus

while True:
    try:
        l = i2c.readfrom(0x2c, 1)[0]
        if l:
            d = i2c.readfrom(0x2c, l)
            if d[2] != 0x01:
                # only forward packets with a specific report ID, discard all others
                print("WARNING")
                print(l, d)
                print("WARNING")
            else:
                d = d[3:]
                print(l, len(d), d)
                usb_hid.report(usb_hid.MOUSE_ABS, d)
    except OSError:
        # touchpad unplugged? retry in a bit
        sleep(0.01)

Now, touch the touchpad, and see. Got a report? Wonderful! Haven’t received anything yet? There are a few things to check. First, your touchpad might require a TP_EN pin to be asserted low or high. Also, if your touchpad has a TP_RST pin, you might need to pull it low on startup for a couple hundred milliseconds. Other than that, if your touchpad is from a reasonably popular laptop, see if there are any references to its quirks in the Linux kernel, or in any of the open firmwares out there.
Further Integration
Theoretically, you could write a pretty universal I2C-HID to USB-HID converter fairly easily – that would allow things like USB-connected touchpads on the cheap, just like some people have been doing with PS/2 touchpads in the good old days. For me, there’s an interesting question – how do you actually integrate this into a keyboard firmware? There are a few options. For instance, you could write a QMK module for dealing with any sort of I2C-HID device, one that passes through reports from the touchpad and generates its own reports for the keyboard events. That is a viable option for most of you; for me, C++ is not my friend as much as I’d like it to be.
There’s the MicroPython option we explored in the last article, and that’s what I’m using for forwarding at the moment. This option needs the descriptor translated into TUSB macros, which took a bit of time, but I could make it work. Soon, USB device support will be added to a new MicroPython release, which will make my translation work obsolete in all the best ways, but it isn’t merged just yet. More importantly, however, there’s no stock keyboard code I could find that’s compatible with this firmware, and as much as it could be educational, I’m not looking into writing my own keyboard scanning code.
Currently, I’m looking into a third option, KMK. A CircuitPython-based keyboard firmware, it should allow things like dynamic descriptor definitions, which lets us save a fair bit of time when iterating on descriptor hacking, especially compared to the MicroPython fork.
All of these options need you to merge the keyboard and touchpad descriptors into one, which makes sense. The only caveat is the question of conflicting report IDs between the stock firmware keyboard descriptor and the stock touchpad descriptor. To fix that, you’d want to rewrite report IDs on the fly – not that it’s complicated, just a single byte substitution, but it’s a good caveat to keep in mind! My touchpad code already does this because the library does automatic report ID insertion, but if yours doesn’t, make sure they’re changed.
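For illustration, rewriting an ID is literally just patching the first byte of each report before forwarding it. Here’s a tiny sketch – the mapping values are hypothetical, so use whatever IDs don’t collide in your merged descriptor, and remember to patch the same values in the descriptor itself:

# hypothetical example: the touchpad's report ID 0x01 collides with the keyboard's, so renumber it to 0x05
REPORT_ID_MAP = {0x01: 0x05}

def remap_report_id(report):
    # the first byte of a report is its report ID - swap it, leave the payload alone
    new_id = REPORT_ID_MAP.get(report[0], report[0])
    return bytes([new_id]) + report[1:]
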
Even Easier Reuse
Now, all of this was about tunneling I2C-HID-obtained HID events into USB. Are you using something like a Raspberry Pi? Good news! There’s i2c-hid support in the Linux kernel, which only really wants the IRQ GPIO and the I2C address of your device. Basically, all you need to do is add a device tree fragment with some very minimal data. I don’t have a tutorial for this, but there’s some initial documentation in the kernel tree, and grepping the device tree directory for the overlay name alone should give you a wonderful start.
This article isn’t long, and that’s because of just how easy I2C-HID is to work with. Now, of course, there are quirks – just check out this file for some examples. Still, it’s nothing that you couldn’t figure out with a logic analyzer, and now you can see just how easy this is. I hope that this can help you on your hacking forays, so whenever you next see a laptop touchpad, you know just how easy they can be to wire up, no matter if you’re using a microcontroller or a Raspberry Pi. […]


A ROG Ally Battery Mod You Ought To Try

Today’s hack is an unexpected but appreciated contribution from members of the iFixit crew, published by [Shahram Mokhtari]. This is a mod for the Asus ROG Ally handheld gaming console that has you swap the battery for an aftermarket battery from an Asus laptop, more than doubling the battery capacity (40 Wh to 88 Wh).
There are two main things you need to do: replace the back cover with a 3D printed version that accommodates the new battery, and move the battery wires into the shell of an old connector. No soldering or crimping needed — just take the wires out of the old connector, one by one, and put them into a new connector. Once that is done and you reassemble your handheld, everything just works; the battery is recognized by the OS, can be charged, runs the handheld wonderfully all the same, and the only downside is that your ROG Ally becomes a bit thicker.

The best part is, it’s hard to fail at applying this mod, as it’s documented to the high standards we’d expect from iFixit. The entire journey is split into detailed steps, there’s no shortage of pictures, and the group has also added warnings for the few potentially problematic aspects you want to watch out for. Plus, in the comment section, we’ve learned that there’s an entire community called AllyMods dedicated to ROG Ally modding that has spawned creations like the dual display mod, which is a joy to see!
This mod reminds us of the time someone modified a Nintendo Game Boy Advance SP with a thicker shell too, not just extending the battery, but also adding things like Bluetooth and 3.5 mm audio, USB-C and wireless charging. A worthy upgrade for a beloved device! […]


Logic Analyzers: Decoding And Monitoring

Last time, we looked into using a logic analyzer to decode SPI signals of LCD displays, which can help us reuse LCD screens from proprietary systems, or port LCD driver code from one platform to another! If you are to do that, however, you might find a bottleneck – typically, you need to capture a whole bunch of data and then go through it, comparing bytes one by one, which is quite slow. If you have tinkered with Pulseview, you probably have already found an option to export decoded data – all you need to do is right-click on the decoder output and you’ll be presented with a bunch of options to export it. Here’s what you will find:
2521888-2521888 I²C: Address/data: Start
2521896-2521947 I²C: Address/data: Address write: 22
2521947-2521954 I²C: Address/data: Write
2521955-2521962 I²C: Address/data: ACK
2521962-2522020 I²C: Address/data: Data write: 01
2522021-2522028 I²C: Address/data: ACK
2522030-2522030 I²C: Address/data: Start repeat
2522038-2522089 I²C: Address/data: Address read: 22
2522089-2522096 I²C: Address/data: Read
2522096-2522103 I²C: Address/data: ACK
2522104-2522162 I²C: Address/data: Data read: 91
2522162-2522169 I²C: Address/data: NACK
2522172-2522172 I²C: Address/data: Stop
Whether on the screen or in an exported file, the decoder output is not terribly readable – depending on the kind of interface you’re sniffing, be it I2C, UART or SPI, you will get five to ten lines of decoder output for every byte transferred. If you’re getting large amounts of data from your logic analyzer and you want to actually understand what’s happening, this quickly will become a problem – not to mention that scrolling through the Pulseview window is not a comfortable experience.
The above output could look like this: 0x22: read 0x01 ( DEV_ID) = 0x91 (0b10010001). Yet, it doesn’t, and I want to show you how to correct this injustice. Today, we supercharge Pulseview with a few external scripts, and I’ll show you how to transfer large amounts of Sigrok decoder output data into beautiful human-readable transaction printouts. While we’re at it, let’s also check out commandline sigrok, avoiding the Pulseview UI altogether – with sigrok-cli, you can easily create a lightweight program that runs in the background and saves all captured data into a text file, or shows it on a screen in realtime!
Oh, and while we’re here, I’d like to show you a pretty cool thing I’ve found on Aliexpress! These are tiny FX2 boards with the same logic analyzer schematic, so they work with the FX2 open-source firmware and Sigrok – but they’re much smaller, have USB-C connectors instead of the cable struggle that is mini-USB, and are often even cheaper than the ‘plastic case’ FX2 analyzers we’ve gotten used to. In addition to that, since you can see the exposed PCB, unlike with the ‘plastic case’ analyzers, you know whether you’re getting input buffers or not!
Boiling It Down
As an example, let’s consider a capture of the I2C bus of the Pinecil soldering iron. On this bus, there are three I2C devices – a 96×16 OLED screen at address 0x3c, an accelerometer at 0x18, and the FUSB302B USB-PD PHY at 0x22. The FUSB302B is a chip we remember from the USB-C low-level PD communication articles where we built our own PD trigger board. I could only have written those articles because I got the logic analyzer captures, processed them into transaction printouts, and used those to debug my PD code – now, you get to learn how to use such captures for your benefit, too.
If you open the above files in Pulseview, you will see a whole bunch of I2C traces. I wanted to zero in on the FUSB302, naturally – the accelerometer and OLED communications are also interesting, but they weren’t my focus. You will also see that there’s a protocol decoder called “I2C filter” attached. Somehow, it’s been remarkably useless for me whenever I try to use it, not filtering out anything at all. No matter, though – right-click on the I2C decoder output row (the one that shows decoded bytes and events), click “Export all annotations for this row”, pick a filename, then open the file in a text editor.
The view you get is a bit overwhelming – we get 22,000 lines of text, which is way more than you could feasibly read through. Of course, most of that is OLED transfer data, and there’s a fair bit of accelerometer querying, too – you want to filter out both of these if you want to only see the FUSB302 transactions. Nevertheless, it’s a good start – you get a text file that contains all the activity happening on the I2C bus, it’s just too much text to read through on your own.
Here’s an example line: 2521783-2521834 I²C: Address/data: Address write: 30. This is very easy to process if you take a closer look at it! Each line describes an I2C event, and it starts with two timestamps – event start and event end, separated by a dash. Then, we get three more values, separated by spaces – decoder name, decoder event type, and the decoder event itself. This output format can be changed in Pulseview settings if you’re so inclined; however, you can easily parse it as-is. For this format, we can simply split the string by spaces (not splitting further than three spaces in), getting a timestamp, decoder name, decoder output type, and decoder event.
I’ll be using Python for parsing, but feel free to translate the code into anything that works for you. Here’s a bit of Python that reads our file line-by-line and puts the useful parts of every line into variables:

with open('decoded.txt', 'r') as f:
    line = f.readline()
    while line:
        line = line.strip()
        if not line:  # empty line terminates the loop, use `continue` to ignore empty lines instead
            break
        # ignoring decoder name and decoder output type – they don't change in this case
        tss, _, _, d = line.split(' ', 3)
        # ... do something with this data ...
        line = f.readline()  # get a new line and rerun the loop body

Parsing lines of text into event data is simple enough – from there, we need to group events into I2C transactions. As you can see, a transaction starts with a Start event, which we can use as a marker to separate different transactions within all the events we get. We can do the usual programming tactic – go through the events, have one “current transaction” list that we add new events to, and an “all transactions so far” list where we put transactions we’ve finished processing.
The plan is simple – in the same loop, we look at the event we get, and if it’s not a Start event, whether it’s a write/read/ACK/NACK bit event, or Stop/Start repeat event, we simply put it into the “current transaction” list. If we get a new Start event, we consider this “current transaction” list finished and add it to our list of received transactions, then start a new “current transaction” list. While we’re at it, we can also parse address and data bytes – we receive them as strings and we need to parse them as hex digits, unless you change the I2C decoder to output something else.
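Here’s a simplified sketch of that grouping step, slotting in where the “do something with this data” placeholder sits in the snippet above – the real, linked code also filters by address and handles a few more edge cases:

transactions = []   # all transactions finished so far
current = []        # the transaction currently being collected

def handle_event(d):
    # d is the decoder event text, e.g. "Start", "Address write: 22", "ACK"
    global current
    if d == 'Start':
        if current:
            transactions.append(current)  # a new Start means the previous transaction is done
        current = ['start']
    elif d.startswith('Address'):
        current.append(int(d.rsplit(' ', 1)[1], 16))  # keep the address as a number
    elif d.startswith('Data write') or d.startswith('Data read'):
        kind = 'wr' if 'write' in d else 'rd'
        current += [kind, int(d.rsplit(' ', 1)[1], 16)]  # data bytes come with a wr/rd marker
    elif d == 'Write':
        current.append('wr')
    elif d == 'Read':
        current.append('rd')
    else:
        current.append(d.lower())  # ack, nack, stop, start repeat
    # remember to append the last `current` once the file runs out of lines
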
Here’s a link to the relevant code section. I could talk more about what it does, for instance, it filters out the FUSB302 transfers by the address, but I’d like to cut to the chase and show the input lines compared to the output transaction list. You can get this output if you run python -i parse.py and enter tr[0] in the REPL:

>>> tr[0]
['start', 34, 'wr', 'ack', 'wr', 1, 'ack', 'start repeat', 34, 'rd', 'ack', 'rd', 145, 'nack', 'stop']

Now, this is a proper I2C transaction! All of these elements are things we can visually discern in the Pulseview UI. Mind you, this code is tailored towards the FUSB302 transaction parsing, but it should not be hard to modify it so that it singles out and parses accelerometer or OLED transactions instead. From here, it’s almost enough to simply concatenate the transaction list elements and get a semi-human-readable transaction, but let’s not stop our ambitions here – the FUSB302 has documentation available, and we can get to a perfectly readable decoding of what the code actually does!
I’ve scrolled through the datasheet and put together a Python dictionary with a register address-to-name mapping. Using that, we can easily go through transactions, mapping them to specific register reads and writes, and convert the raw transaction data into lines of text that clearly tell us – first, we write this byte to the SWITCHES0 register, then we write this byte into the POWER register, and so on. Here’s the code I wrote to make verbose transactions – and it helps you turn logic analyzer captures into Python code!
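To give you an idea of the shape of it – and this is a sketch, not the linked code – the mapping plus a formatter can be as simple as the snippet below. Only DEV_ID at 0x01 and RESET at 0x0C are confirmed by the reads and writes shown in this article, so treat the dictionary as a stub to fill in from the FUSB302 datasheet, and note that it assumes simple single-register reads and writes like the ones above:

FUSB302_REGS = {
    0x01: 'DEV_ID',
    0x0c: 'RESET',
    # ...fill in the rest from the FUSB302 datasheet
}

def describe(tr):
    # tr is one parsed transaction, e.g. ['start', 34, 'wr', 'ack', 'wr', 1, 'ack', ...]
    addr = tr[1]
    reg = tr[5]  # the first data byte written selects the register
    reg_name = FUSB302_REGS.get(reg, 'reg 0x%02x' % reg)
    if 'start repeat' in tr:
        # register read: write the register address, repeated start, read the value back
        op, value = 'read', tr[tr.index('start repeat') + 5]
    else:
        # register write: the second data byte is the value being written
        op, value = 'write', tr[8]
    return '0x%02x: %s %s = 0x%02x (0b%s)' % (addr, op, reg_name, value, format(value, '08b'))
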
Say, you’re writing a replacement open-source firmware for something you own, or perhaps you’re poking around copying the implementation of some protocol for your own purposes, like I copied the Pinecil’s PD implementation to help me debug my own PD code. Here’s the cool part – you can translate this kind of output into your own high-level code near-instantly, to the point where you can even modify this decoding script to output Python or C code! This is just like decompiling, except you get a language of your choice, and a human-readable description of the code’s external behaviour, which is often what you actually want.
Here’s how a verbose transaction looks: [34, '0x22', 1, '0x01 ( DEV_ID)', 'rd', [145], '0x91 (0b10010001)']. And this is how I can format such a transaction, using a helper function included in the code I’ve linked:

>>> tr_as_upy(transactions[0])
i2c.readfrom_mem(0x22, 0x1) # rd: DEV_ID 0x91 (0b10010001)
>>> tr_as_upy(transactions[1])
i2c.writeinto_mem(0x22, 0xc, b'\x01') # wr RESET: 0x01 (0b00000001)

Such code allows you to rapidly reverse-engineer proprietary and open-source devices alike, while getting a good grasp on what it is specifically that they do. What’s more, with such a decoder, you can also write a protocol decoder for Sigrok so that you can easily access it from Pulseview! For instance, if you’re capturing reads/writes for an I2C EEPROM, there’s an I2C EEPROM decoder in Sigrok that you can add – and there are never enough Sigrok decoders, so adding your own to the pile is a wonderful contribution to the open-source logic analysis software that everybody knows and loves.
Going Further With Commandline
This decoding approach gives you the most control over your output data, which massively helps if you have to process large amounts of it. You can also debug intricate problems like never before. For instance, I’ve had to help someone debug a web-based ESP8266 flasher that can’t flash particular kinds of firmware images properly, and for that, I’m capturing the UART data being transferred between the PC and the ESP8266.
There’s a problem with such capturing, too – during flashing, the UART baudrate changes, with the bootloader baudrate being 76800, the flashing baudrate being 468000, and the software baudrate being 115200. As a result, you can’t pull off the usual trick where you connect a USB-UART adapter’s RX pin to your data bus and have it stream data to a serial terminal window on your monitor. Well, with granular control over how you process data captured by the logic analyzer, you don’t have to bother with that!
Bytes received at 76800 baud marked in orange, bytes received at 115200 baud marked in green; the exact commandline is visible in the screenshot, too!
The idea is – you connect a logic analyzer to the data bus, and stack two UART decoders onto the same pin! Each decoder is going to throw error messages whenever the current signal is at a different baudrate than the decoder expects. Now, Sigrok being a reasonably modular and open-source project, you could absolutely write a UART decoder for Sigrok that works with multiple baudrates. If you’re like me and don’t want to do that, you can also go the lazy way about it and mash the output of the two decoders together in realtime, using the error messages as guidance on where the switch occurred!
For this kind of purpose, having realtime and text-only processing of Sigrok-produced data is more than enough. Thankfully, the FX2 analyzers let you capture data indefinitely, and Sigrok commandline lets you stack protocol decoders that will then run in realtime! So, I’ve made a script that you can pipe sigrok-cli output into, which compares decoder output to figure out which baudrate is currently being used, and outputs data from the decoder with the least faults. The code’s missing a smarter buffering algo, so the switching-between-baudrates moment is a bit troublesome, as you can see in the screenshot, but it’s working otherwise!
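My script is linked above, but to give you the rough idea, here’s a sketch of such a filter – note that the annotation prefixes (‘uart-1:’, ‘uart-2:’) and the ‘Frame error’ text are assumptions about how your sigrok-cli build labels stacked decoder output, so check a few lines of real output and adjust:

#!/usr/bin/env python3
# pipe sigrok-cli output into this script; exact sigrok-cli options depend on your setup
import sys
from collections import deque

errors = {'uart-1': deque(maxlen=20), 'uart-2': deque(maxlen=20)}  # rolling error window per decoder

for line in sys.stdin:
    prefix, _, text = line.strip().partition(': ')
    if prefix not in errors:
        continue
    errors[prefix].append(1 if 'Frame error' in text else 0)
    best = min(errors, key=lambda k: sum(errors[k]))  # decoder with the fewest recent errors wins
    if prefix == best and 'Frame error' not in text:
        print(prefix, text, flush=True)
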
With this Sigrok commandline approach, you gain one more logic analyzer superpower! Since FX2 analyzers let you capture data indefinitely, streaming it to your PC as it is captured, a commandline decoder lets you wire up a FX2 analyzer to a Pi Zero – so you can build a tiny device capturing and decoding a data bus 24/7. Set the FX2 and Pi Zero combo near whatever you’re trying to tap into, run sigrok, have it save data with timestamps onto an SD card, and you can collect weeks of bus activity data easily! This is the kind of capability I wish I had when I was tasked with reverse-engineering a special piece of industrial machinery, controlled over CAN and using a semi-proprietary communication algorithm; having lots of data seriously helps in such scenarios and I was struggling to capture enough.
If you’d rather keep to low-depth GUI experiments, this kind of parsing is useful too – Sigrok protocol decoders are written in Python, which means you can also take your Python output-parsing code and turn it into a Pulseview-accessible protocol decoder reasonably easily. All in all, this kind of experimentation lets you squeeze as much as possible out of even the cheapest logic analyzers out there. In the next article, I’d like to go more in-depth on other kinds of logic analyzers we have available – especially all the cheap options. Given that Sigrok has recently merged the PR with support for the Pi Pico, there’s a fair bit you can get beyond what the FX2-based analyzers have to offer! […]


When Your Level Shifter Is Too Smart To Function

By now, 3.3 V has become a comfortable and common logic level for basically anything you might be hacking. However, sometimes you still need to interface your GPIOs with devices that are 5 V, 1.8 V, or something even less common like 2.5 V. At this point, you might stumble upon autosensing level shifters, like the TXB010x series Texas Instruments produces, and decide that they’re perfect — no need to worry about pin direction or bother with pullups. Just wire up your GPIOs and the two voltage rails, and you’re good to go. [Joshua0] warns us, however, that not everything is hunky-dory in the automagic shifting world.
During board bring-up and multimeter probing, he found that the 1.8 V-shifted RESET signal sat at 1.0 V — and its 3.3 V counterpart stayed at 2.6 V. Was it a current fight between GPIOs? A faulty connection? Voltage rail instability? It got more confusing when the debugging session revealed that the shifter operated normally as soon as the test points involved were probed with the multimeter in a certain order. After re-reading the datasheet and spotting a note about reflection sensitivity, [Joshua0] realized he should try probing the signals with a high-speed logic analyzer instead.

At a high enough sampling rate, he found the signals constantly oscillating back and forth – the shifter’s autosensing mechanism was being fooled into switching by signal reflections, fast enough that the multimeter read the signals as sitting at an in-between voltage. It turns out that even with signals that are meant to change only once during a board’s bootup, these shifters might give you more trouble than they’re worth. Not to worry, however, as you still have myriad ways to level shift any signal you want. […]


Ultimate Power: Lithium-Ion Packs Need Some Extra Circuitry

A LiIon pack might just be exactly what you need for powering a device of yours. Whether it’s a laptop, a robot, a custom e-scooter, or a CPAP machine, there’s likely a LiIon cell configuration that would work perfectly for your needs. Last time, we talked quite a bit about the parameters you should know about when working with existing LiIon packs or building a new one – configurations, voltage notations, capacity and internal resistance, and things to watch out for if you’re just itching to put some cells together.
Now, you might be at the edge of your seat, wondering: what kind of configuration do you need? What target voltage would be best for your task? What physical arrangement of the pack can you afford? What are the safety considerations? And, given those, what kind of electronics do you need?
Picking The Pack Configuration
Pack configurations are well described by XsYp: X serial stages, each stage having Y cells in parallel. It’s important that every stage is the same as all the others in as many parameters as possible – unbalanced stages will bring you trouble.
To get the pack’s nominal voltage, you multiply X (the number of stages) by 3.7 V, because this is where your pack will spend most of its time. For example, a 3s pack will have an 11.1 V nominal voltage. Check your cell’s datasheet – it tends to have all sorts of nice graphs, so you can calculate the nominal voltage more exactly for the kind of current you’d expect to draw. For instance, the specific cells I use in a device of mine will spend most of their time at 3.5 V, so I need to adjust my voltage expectations to 10.5 V accordingly if I’m to stack three of them together.
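If you like sanity-checking that arithmetic in code, here’s a throwaway Python helper – the 4.2 V full and 3.0 V empty figures are typical datasheet values, so plug in your own cell’s numbers:

def pack_voltages(stages, nominal=3.7, full=4.2, empty=3.0):
    # per-stage voltages multiplied by the number of series stages
    return {'nominal': stages * nominal, 'full': stages * full, 'empty': stages * empty}

print(pack_voltages(3))  # a 3s pack: roughly 11.1 V nominal, 12.6 V full, 9.0 V empty
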
Now, where do you want to fit your pack? This will determine the voltage. If you want to quickly power a device that expects 12 V, the 10.5 V to 11.1 V of a 3s config should work wonders. If your device detects undervoltage at 10.5V, however, you might want to consider adding one more stage.
How much current do you want to draw? For the cells you are using, open their spec sheet yet again, take the max current draw per cell, derate it by like 50%, and see how many cells you need to add to match your current draw. Then, add parallel cells as needed to get the capacity you desire and fit the physical footprint you’re aiming for.
The last word for this section is on safety. When working with packs that exceed 20 V, you are at a higher risk of injury than with the usual low-voltage electronics. DC doesn’t shock you the way AC does – it makes your muscles contract in a way that risks you holding onto whatever is shocking you, and it can also fry your muscles from the inside. If you’re working with ebike packs, you should seriously heed the advice of keeping one hand in your pocket as much as humanly possible.
Not only that, but working with LiIon packs is physically dangerous even if you don’t get shocked. At a certain pack voltage and capacity, a short-circuit can blind you, and it will easily melt your metal tools – use plastic as much as possible. If it can drive your ebike motor, it contains enough energy to do a lot of damage.
Thankfully, in the end, safety is manageable. Plus, there’s electronics that help you take care of it.
Electronics And Balancing
In terms of circuitry, you need three things for a LiIon pack – charging, protection, and balancing. So, let’s start with balancing, because balancing circuits give you some much-needed insights into LiIon pack charging requirements.
Balancing is seriously required for LiIon packs that have more than one cell in series. A LiIon charger chip, by itself, doesn’t monitor the individual stages in the series. Over time, the stages in any pack will develop mismatches in internal resistance and capacity, if those aren’t noticeably present by the time the pack is built. During charging, this means that some of the stages might get fully charged while other stages haven’t finished charging yet – and a LiIon charger that’s only connected to the pack’s ground and output can’t detect that, so it will continue pumping current into the pack, which might lead to overcharging.
Balancers are circuits you build into the LiIon pack directly, in parallel with every stage, that contain a hefty resistor and can shunt the charging current away from a cell stage when the voltage on that stage exceeds the maximum voltage threshold. As the pack is charging, the balancers will turn on one by one, protecting the stages with the lowest capacity from overcharge. Balancers are simple: you can build one with a resistor, a TL431, and a random FET from a desktop motherboard. Of course, there’s a balancer available on the market for nearly every battery configuration, too.
You have to add a balancer if you want your pack to be safe, doubly so with high stage count packs where even minuscule differences will soon be exacerbated. And triply so if you’re making your pack out of salvaged batteries, which may have mismatches that series charging would only worsen.
If you’re looking for a protection circuit that does balancing for you, look out for rows of SMD resistors on the board. Balancers are not a fix for a badly built pack – they can’t dissipate all that much current; ultimately, they are safeguards that help keep a good pack from going bad, and you need them just as much as you need both the charging and protection circuits.
Charging And Protection
Once you move to a multi-stage pack, the classic TP4056 single-stage chargers will not work anymore, as much as I love them. Instead, get a charger chip that can handle series configurations – you can find them reasonably easily. For low-count-series and high-count-series packs, there are plenty of charging circuits on Aliexpress. For low-count-series packs, these circuits take the form of small modules, TP4056-like, just explicitly being marked “2s” or “8.4 V”. 
It’s best if everything is under your control. You must have control over the charger’s configured termination voltage, or at least know what voltage it’s set to – too high and you overcharge, which is pretty bad; too low and you undercharge, which is actually beneficial for pack health but will cost you some capacity. You will likely want control over the charging current, too – just like with voltage, setting it lower than necessary is fine, but setting it too high is not.
As with the single-cell setups, there are also multi-series protection circuits that safeguard you from overcurrent and overdischarge. These aren’t meant as a substitute for proper handling, but they can protect you from accidents like short-circuits on the output or a charger going haywire; not that there can’t be a spark, but the spark’s consequences will be greatly diminished.
It’s even more convenient when the balancing circuit and overdischarge protection are included on one PCB. These battery management system (BMS) boards are plentiful on the market and take care of everything under one roof. Look for large SMD resistors as a good indicator that the circuit has balancing as well as overcurrent protection. These boards often lack programmable overcurrent thresholds, though, so be ready to find part numbers for extra FETs and solder them onto conveniently unpopulated footprints if you want to raise the limit.
These are the basics you need to work with a multi-stage pack. Is anything unclear? Ask in the comments below – this is the kind of topic where misunderstandings are worth getting corrected early. […]


A Simple Line Injector Shows You The Wonderful World Of PSRR

[limpkin] writes us to show a line injector they’ve designed. The principle is simple — if you want to measure how well any of your electronic devices reject PSU noise, a figure known as PSRR (Power Supply Rejection Ratio), you can inject noise into the supply with this board, and then measure the noise on your device’s output. The board is likewise simple – a few connectors, resistors and caps, and a single N-FET!
You do need a VNA, but once you have that, you get a chance to peek into an entire world of insights. Does that 1117 LDO actually filter out noise better than a buck regulator? Is it enough to use a Pi filter for that STM32’s ADC rail, and do the parts you’re using actually help with that task? How much noise does your device let through in the real world, after being assembled with the specific components you’ve picked? [limpkin] shows us a whole bunch of examples – putting regulators, filters, and amplifiers to the test, and showing us how there’s more than meets the eye.
Everything is open source, with full files available on the blog. And, if you want it pre-assembled, tested and equipped with the CNC-milled case, you can get it on Tindie or Lektronz! Of course, even without a tool like this, you can still get good filter designs done with help of computer-aided modelling.
We thank [alfonso] for sharing this with us! […]


Extenders And Translators For Your I2C Toolkit

If you’ve ever been laying out a network of I2C devices inside a project box or throughout your robot’s body, you’ll probably know that I2C is not without its pitfalls. But for many of those pitfalls, there’s a handy chip you can use. [Roman Dvořák] from ThunderFly has experienced them on their drone building journeys, and that’s why they bring us two wonderful open source hardware boards: an I2C bus extender, and an I2C address translator.
The first board, the I2C bus extender, is based around the TCA4307 chip, and not only does it let you extend the bus further than it would normally go, it also protects you. When the bus capacitance is more than your devices can handle, or a particular misbehaving device gets the bus stuck, this chip will take care of it and dissipate your troubles. It will even let you know when your bus is wired up correctly, with a handy shine-through LED!
The second board is an I2C address translator. We’ve covered these before, but in short, address translators let you avoid I2C address conflicts while using multiple devices that share the same address. This particular module uses the LTC4317 chip, a common choice for such translation, and the board leaves no feature unimplemented. In the README, there are quite a few pictures with examples of where this board proves mighty useful, too!
It appears that ThunderFly open sources a lot of their designs on GitHub, an effort that we salute. The designs are great to learn from, but if you’re just looking for turn-key hardware, you can get both of these boards from their Tindie store. The cables they use have locking connectors, but as long as the pinout matches, you should be able to solder a JST-SH socket and add these modules to your QWIIC toolkit. […]


Finally Taming Thunderbolt With Third-Party Chips

Thunderbolt has always been a functionally proprietary technology, held secret by Intel until the standard was “opened” in a way that evidently wasn’t enough for anyone to meaningfully join in. At least, until last year, when we saw announcements about ASMedia developing two chips for Thunderbolt use. Now, open-source efforts are starting to appear, letting us tinker with PCIe over Thunderbolt at prices lower than $100 per endpoint.
In particular, this board from Reddit uses the ASM2464PD — a chipset that supports TB3/TB4/USB4 and gives you a 4x PCIe link. Harnessing those 40 Gbps to wire up an NVMe SSD, this board shows us it’s very much possible to design a fully functional ASM2464PD board while going only off whatever scarce data is available to the public. With a minimal footprint that barely extends beyond the 2230 SSD it’s designed for, curved trace layout, and a CNC-milled case, this board sets a high standard for a DIY Thunderbolt implementation.
The main problem is that this project is not open-source – all we get is pretty pictures and a bit of technical info. Thankfully, we’ve also seen [WifiCable] take up the mantle of making this chip actually hobbyist-available – she’s created a symbol, fit a footprint, and made an example board in KiCad retracing [Picomicro]’s steps in a friendly fashion. The board is currently incomplete because it needs someone to buy an ASM2464PD enclosure on Aliexpress and reverse-engineer the missing circuitry, but if open-source Thunderbolt devices are on your wish list, this is as close as you get today – maybe you’ll be able to make an eGPU adapter, even. In the meantime, if you don’t want to develop hardware but want to take advantage of Thunderbolt, you can build 10 Gbps point-to-point networks. […]