Integration Taught Correctly

[Math the World] claims that your calculus teacher taught you integration wrong. That’s assuming, of course, that you learned integration at all and haven’t forgotten it. The premise is that most people think of performing an integral as finding the area under a curve or as taking the “antiderivative.” However, fewer people think of integration as adding up many small parts. The video asserts that studies show that students who don’t understand this third definition have difficulty applying integration to real-world problems.
We aren’t sure that’s true. People who write software have probably looked at numerical integration like Simpson’s rule or the midpoint rule. That makes it pretty obvious that integration is summing up small bits of something. However, you usually learn that very early, so you’re forgiven if you didn’t get the significance of it at the time.
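If you want to see the “sum of small parts” view in action, the midpoint rule takes only a few lines (a generic sketch, not code from the video):

```python
# Midpoint rule: slice [a, b] into n thin strips and add up the area of
# one skinny rectangle per strip. Integration really is just summing.
def midpoint_integrate(f, a, b, n=1000):
    width = (b - a) / n
    total = 0.0
    for i in range(n):
        mid = a + (i + 0.5) * width  # midpoint of the i-th strip
        total += f(mid) * width      # area of one thin rectangle
    return total

print(midpoint_integrate(lambda x: x * x, 0.0, 1.0))  # very close to 1/3
```

Swap the loop body for Simpson’s weights and you have Simpson’s rule; either way, the integral is just a sum of small pieces.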

Even if you didn’t learn calculus, the video is an easy introduction to the idea of integration with practical examples drawn from basketball, archery, and more. Although there is a bit of calculus terminology, the actual problems could just as easily have been the voltage on a charging capacitor, for example.
We think calculus has a bad rap as being difficult when it isn’t. Maybe you should take more than 20 minutes to learn it.
[embedded content] […]


Radioactive 3D Printed Flower Glows and Glows

Glow-in-the-dark projects aren’t that uncommon. You can even get glow-in-the-dark PLA filament. However, those common glowing items require a charge from light, and the glow fades very quickly. [Ogrinz Labs] wasn’t satisfied with that. His “Night Blossom” 3D-printed flower glows using radioactive tritium and will continue to glow for decades.
Tritium vials are available and often show up in watches for nighttime visibility. The glow doesn’t actually come directly from the radioactive tritium (an isotope of hydrogen). Instead, beta particles from the decay excite a phosphor coating inside the vial, which glows in the visible spectrum.
Once you have the vials, it is easy to understand how to finish off the project. The flower contains some long tubes inside each petal. There are also a few tiny vials in the center. The whole assembly goes together with glue.

Tritium tubes are widely available. There are, however, fake tubes, so if you get a good deal, you might want to make sure you are really getting a tritium tube. One way to tell is that fake tubes will glow brighter when you briefly expose them to a bright light. A real tritium tube won’t care if you hit it with light or not. Fake tubes are often cheap, and real tubes are not. Also, because tritium is radioactive, there may be laws or regulations about buying, selling, and possessing them, depending on where you live. The truth is, these little tubes have tiny amounts of material, but if you break one, you probably shouldn’t sniff the contents.
There were at least two versions. The first was FDM-printed in clear plastic; resin and pigment added color, and a clearcoat sealed it all in. The second used resin printing along with pigments. The FDM version diffused the light a bit, which might benefit this application.
If you don’t need much power, you can use these vials to make a simple nuclear battery. Afraid of radioactivity? Well, that’s generally a good idea, but in this case, you are probably fine.
[embedded content] […]


Ask Hackaday: What If You Did Have a Room Temperature Superconductor?

The news doesn’t go long without some kind of superconductor announcement these days. Unfortunately, these come in several categories: materials that work at warmer temperatures than previous materials but still require cryogenic cooling, materials that require very high pressures, or materials that, on closer examination, aren’t really superconductors. But it is clear the holy grail is a superconducting material that works at ordinary temperatures and pressures. Most people call that a room-temperature superconductor, though what you really want is an “ordinary temperature and pressure superconductor.” That’s a mouthful, however.
In the Hackaday bunker, we’ve been kicking around what we will do when the day comes that someone nails it. It isn’t like we have a bunch of unfinished projects that we need superconductors to complete. Other than making it easier to float magnets, what are we going to do with a room-temperature superconductor?
We draw schematics as though wires have no resistance, but in real life, electrons flowing through a wire cause some loss. In 1911, the Dutch physicist Heike Kamerlingh Onnes was pioneering low-temperature research. At the time, common wisdom held that while lowering a metal’s temperature reduced its resistance, electrons would likely be immobile at absolute zero, so no electrical current would flow at that temperature. Onnes observed quite the opposite. Starting with mercury, he found that at 4.2 K, very near absolute zero, the resistivity of the material abruptly dropped to zero.
Of course, getting materials near 4.2 K is a big problem. For example, liquid nitrogen — which is usually used in labs when you want something cold — boils at 77 K. Even then, cooling things with liquid nitrogen isn’t very practical for most applications. However, there are some ceramic materials that exhibit superconductivity above 90 K, so it is possible to use superconductors today if you are willing to cool them with something like liquid nitrogen.
Superconductors don’t exhibit electrical loss, so a current can travel forever in a loop of superconducting material. Experiments have observed currents circulating in a loop for nearly three decades with no measurable loss, and theories predict such currents would persist for at least 100,000 years, if not longer than the lifetime of the universe.
The physics behind it all is hairy. In normal conductors, electrons flow across an ionic lattice. Some electrons collide with the ions, converting some of their energy to heat. In a superconductor, the electrons bind in weak pairs known as Cooper pairs. The pairs form a type of superfluid that can flow without energy dissipation. You can see a more detailed explainer in the video below.
[embedded content]
One important takeaway about superconductivity is that it disappears above certain current and magnetic field levels. So in addition to characterizing superconductors by their critical temperature and pressure, it’s also important to know the critical current density and the critical magnetic field strength.
Obvious Cases
There are several places where superconductors are used today. SQUIDs (superconducting quantum interference devices) are very sensitive magnetometers built around Josephson junctions: superconductors separated by a thin insulating barrier. These are common in labs, MRI machines, and quantum computers. It is possible to use them to locate submarines, too. They do not need to pass large currents and are not subject to strong fields. Presumably, if you had room-temperature superconductors, you could form Josephson junctions with them, and all of these devices would become less expensive and easier to operate.
Another place we see superconductors already is in electromagnets for things like MRIs, particle accelerators, levitating trains, and fusion reactors. These are the applications that require high current or are subject to strong magnetic fields. Today, these applications all require liquid nitrogen or liquid helium. If future room-temperature superconductors end up having high critical current densities as well, you could cheaply build very strong electromagnets.
Certainly, places where we use cold superconductors today would just get better. But there are also several applications that would be possible today if the cooling overhead weren’t so prohibitive. Of course, some of this will depend on the characteristics of the unknown magic material. For example, you often hear people say that electrical transmission lines could be superconductors. That’s true, but only if the material has a high critical magnetic field, because otherwise it doesn’t really work for AC current. On the other hand, we use AC partly as a hedge against losses, so if you were willing to change the whole system, you could possibly use superconducting cables to transmit DC at lower voltages over long distances, but then you’re relying on a high critical current density.
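To put a rough number on the stakes, here is a back-of-the-envelope resistive loss calculation; the figures are illustrative assumptions of ours, not data from any real line:

```python
# I^2 * R loss on a conventional transmission line. The numbers below are
# made up but plausible; a superconducting line would lose essentially
# nothing to resistance, subject to its critical current and field limits.
def line_loss_watts(current_amps, ohms_per_km, length_km):
    return current_amps**2 * ohms_per_km * length_km

# 1000 A through 100 km of cable at an assumed 0.05 ohm/km:
print(line_loss_watts(1000, 0.05, 100) / 1e6, "MW lost as heat")
```

The quadratic dependence on current is exactly why conventional grids step the voltage up and the current down; zero resistance removes that constraint entirely.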
Consumer Electronics
We aren’t entirely certain what superconductors will do for consumer electronics. Better magnets might mean better motors, so maybe your electric drill will be lighter and more powerful. Lower resistance in components could mean less heat loss and longer battery life. You often hear that superconductors will lead to phones that last weeks on a charge. Maybe, but our guess is not right away. We doubt that loss in the interconnect is really what’s draining your phone battery. However, it is true that components with fewer inefficiencies could lead to longer battery life. It might allow faster charging, too. After all, GaN chargers are more efficient because they produce less heat than conventional electronics. A superconducting charger could take that even further.
In general, you could expect warm superconducting electronics to be able to handle more current in smaller spaces. There is some thought they may also be faster. Early Josephson junctions (admittedly, in liquid helium) were much faster than conventional transistors in use at the time. Of course, transistors are better today, but presumably widespread use of superconducting junctions would also bring improvements.
What Will You Do?
The truth is, though, since we don’t know the properties of the room-temperature superconductor, we don’t know what it may or may not bring. Maybe you won’t have a superconducting cell phone because it would reset itself whenever you encountered a magnetic field. We simply don’t know.
However, we did want to ask. If you could open your web browser and order superconducting parts right now, what would you do with them? Do you want wire? Coils? Switching devices? And why? Let us know in the comments below.
If you have access to liquid nitrogen, maybe you are already using superconducting material. If so, let us know that, too. Or, perhaps you are working on making the next material to claim room-temperature superconductivity.
Featured Image: The eight toroidal superconducting magnets at the heart of the LHC, credit: CERN. […]


What is x86-64-v3?

You may have heard Linux pundits discussing x86-64-v3. Can recompiling Linux code to use this bring benefits? To answer that question, you probably need to know what x86-64-v3 is, and [Gary Explains]… well… explains it in a recent video.
If you’d rather digest text, RedHat has a recent article about their experiments using the instruction set in RHEL 10. From that article, you can see that most of the new instructions support enhancements for vectors and bit manipulation. The level also allows for more flexible instructions that leave their results in an explicit destination register instead of overwriting one of the operand registers.
Of course, none of this matters for high-level code unless the compiler supports it. However, gcc version 12 will automatically vectorize code when using the -O2 optimization flag.

There’s a snag, of course: using these instructions makes code incompatible with older CPUs. How old? Intel has supported these instructions since 2013 in the Haswell CPUs, and AMD came to the party in 2015. Atom is the exception: some Atom CPUs have only had v3 since 2021, and some later Intel Atoms still do not support it fully. There is a newer set of instructions, x86-64-v4, but it is still too new, so most people, including RedHat, plan to support v3 for now. You can find a succinct summary table on Wikipedia.
So, outside of Atom processors, you must have fairly old hardware to be missing the v3 instructions. Some of these instructions are pretty pervasive, so switching at run time doesn’t seem very feasible.
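Checking whether your own machine makes the cut is easy enough. Here’s a rough sketch of our own (not an exhaustive list of the level’s requirements; the names follow Linux’s /proc/cpuinfo spellings, where LZCNT shows up as `abm`):

```python
# x86-64-v3 corresponds roughly to the Haswell-era feature set. This is a
# sketch, not the full formal definition of the microarchitecture level.
V3_FLAGS = {"avx", "avx2", "bmi1", "bmi2", "f16c", "fma", "abm", "movbe", "xsave"}

def supports_v3(cpu_flags):
    """True if the given collection of CPU flag strings covers the v3 level."""
    return V3_FLAGS <= set(cpu_flags)

def read_cpu_flags(path="/proc/cpuinfo"):
    # On Linux, the per-CPU "flags" line lists every supported feature.
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()
```

Running `supports_v3(read_cpu_flags())` on a Linux box gives a quick yes or no.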
We wonder if older processors would raise illegal instruction traps for these instructions. If so, you could add emulated versions, the same way systems used to emulate a math coprocessor when one wasn’t installed.
Keep in mind that the debate about dropping versions before x86-64-v3 doesn’t mean Linux itself will care. This is simply how the distributions do their compiles. Compiling everything yourself is possible but daunting, and there will doubtless be distributions that elect to maintain support for older CPUs for as long as the Linux kernel allows it.
Intel would like to drop older non-64-bit hardware from CPUs. If you want to sharpen up your 64-bit assembly language skills, try a GUI.
[embedded content]
(Title image from Wikipedia) […]


Tetris Goes Round and Round

You’ve probably played some version of Tetris, but [the Center for Creative Learning] has a different take on it. Their latest version features a cylindrical playing field. Wiring up all those LEDs individually wouldn’t be simple, but LED strips make it a little easier. You can find the code for the game on GitHub.
In all, there are 5 LED strips for a display and 13 strips for the playing area, although you can adjust this as long as there are at least 10 rows. The exact number of LEDs will depend on the diameter of the PVC pipe you build it on.
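Conceptually, the wraparound is the only new wrinkle over flat Tetris: column arithmetic happens modulo the circumference. A hypothetical mapping (our illustration; the actual wiring in the GitHub code may differ) looks like:

```python
# Map a (ring, column) cell on the cylinder to an index on one long LED
# chain, assuming the rings are wired end to end with leds_per_ring pixels
# each. The modulo is what makes the playfield wrap around.
def led_index(ring, col, leds_per_ring):
    return ring * leds_per_ring + (col % leds_per_ring)
```

A piece sliding off the right edge of a 16-pixel ring (`col == 16`) simply reappears at column 0, and negative columns wrap the other way.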

Using a PS2 controller, the game lets you play in full-cylinder or half-cylinder mode. We were hoping they’d put up a video showing the gameplay, but we couldn’t find one.
We couldn’t help but think that this would make an excellent display for many purposes. You might even be able to design different games for it.
We’ve seen full-circle Tetris, but it is hardly the same idea. If you want just plain Tetris, you could break out your transistor tester. […]


Your Scope, Armed and Ready

[VoltLog] never has enough space on his bench. We know the feeling and liked his idea of mounting his oscilloscope on an articulated arm. This is easy now because many new scopes have VESA mounts like monitors or TVs. However, watching the video below, we discovered there was a bit more to it than you might imagine.
First, there are many choices of arms. [VoltLog] went for a cheap one with springs that didn’t have a lot of motion range; you may want something different. But we didn’t realize that many of these arms have a minimum weight requirement, and modern scopes may be too light for some of them. Most arms require at least 2 kg of load to balance the tension in their springs or hydraulics. Of course, you could add a little weight to the arm’s mounting plate if you needed it. The only downside we see is that it makes it hard to remove the scope if you want to use it somewhere else.
Assuming you have a mount you like, the rest is easy. Of course, your scope might not have VESA mounting holes. No problem. You can probably find a 3D printed design for an adapter or make (or adapt) your own. You might want to print a cable holder at the same time.
Honestly, we’ve thought of mounting a scope to the wall, but this seems nicer. We might still think about 3D printing some kind of adapter that would let you easily remove the scope without tools.
Of course, there is another obvious place to mount your scope. Monitor arms can also mount microscopes.

[embedded content] […]


Mirror, Mirror, Electron Mirror…

If you look into an electron mirror, you don’t expect to see your reflection. As [Anthony Francis-Jones] points out, what you do see is hard to explain. The key to an electron mirror is that the electric and magnetic fields are 90 degrees apart, and the electrons are 90 degrees from both.
You need a few strange items to make it all work, including an electron gun with a scintillating screen in a low-pressure tube. Once he sets an electric field going, the blue line representing the electrons goes from straight to curved.

The final addition is the magnetic field, courtesy of a pair of coils. When activated, the coils deflect the electrons downward, in opposition to the electric field, which deflects them upward. The mirror effect appears when electrons under both forces move downward, seem to strike some invisible mirror, and then move upward again.
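You can reproduce the bounce numerically from nothing but the Lorentz force. This toy simulation (our own sketch in arbitrary units, not a model of [Anthony]’s tube; the up/down sense depends on your sign conventions) starts a charged particle at rest in crossed fields and watches it rise, turn around, and fall back:

```python
import math

# A particle at rest in E = (0, E, 0) and B = (0, 0, B) traces a cycloid:
# it accelerates along y, the magnetic force turns it around, and it
# returns to its starting height, as if bouncing off an invisible mirror,
# while drifting along x at the E/B drift speed.
Q, M = 1.0, 1.0          # charge and mass (arbitrary units)
E, B = 1.0, 1.0          # field strengths
omega = Q * B / M        # cyclotron frequency
period = 2 * math.pi / omega

steps = 100_000
dt = period / steps
x = y = vx = vy = 0.0
y_max = 0.0
for _ in range(steps):
    # Lorentz force: F = q(E + v x B), with B along z
    ax = (Q / M) * (vy * B)
    ay = (Q / M) * (E - vx * B)
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt
    y_max = max(y_max, y)

print(f"peak height ~ {y_max:.3f}, final height ~ {y:.4f}, drift ~ {x:.3f}")
```

The analytic solution says the peak height is 2E/(Bω) and the particle returns to y = 0 after one cyclotron period, which is exactly the “strike a mirror and come back” motion on the screen.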
Why does it work? [Anthony] explains it very well at the end of the video. If you want to see what the big labs are doing, try trapping electrons. We’ve seen CRTs that use magnetic deflection (usually TVs) and electrostatic deflection (usually oscilloscopes). Other than the screen being the wrong way, it seems like you could do this with a CRT. Those tubes had a long run but are getting harder to find every year.
[embedded content] […]


Ask Hackaday: What’s in Your Garage?

No matter what your hack of choice is, most of us harbor a secret fantasy that one day, we will create something world-changing, right? For most of us, that isn’t likely, but it does happen. A recent post from [Rohit Krishnan] points out that a lot of innovation happens in garages by people who are more or less like us.
He points out that Apple, Google, and HP all started in garages. So did Harley Davidson. While it wasn’t technically a garage, the Wright brothers were in a bicycle workshop, which is sort of a garage for bikes. Even Philo Farnsworth started out in a garage. Of course, all of those were a few years ago, too. Is it too late to change the world from your workbench?
We’d argue basements are at least as important (although in southern Texas, they call garages Lone Star basements since no one has proper basements). The real point of the article, though, isn’t the power of the garage. Rather, it is the common drive and spirit of innovators to do whatever it takes to make their vision a reality. A few hundred bucks and an oddball space has given birth to many innovations.

So, what’s in your garage? Or where do you hack? And do you think innovation at that scale is still possible today? When all you needed to build a product that would launch HP was a few soldering irons and hand tools, it was a gentler slope than standing up a semiconductor fab line.
Easier, Yet Harder?
Then again, some things are easier. Getting a PCB made and stuffed is orders of magnitude easier than it was two decades ago. Prototyping is trivial with 3D printing and CNC machining. Fielding a computer-based application that can scale to millions of users is cheaper and easier than ever, too. So, where are the garage innovators today? Are people no longer willing to work in a garage for little pay, hoping that it will pay off?
And of course, it doesn’t always pay off. You just hear about the ones that do. For every garage band that becomes Nirvana, The Ramones, or Creedence Clearwater Revival, there are probably hundreds like the Fugitive Five you probably haven’t heard of and hundreds more that you absolutely have never heard of. Even Walt Disney (who started in a garage, according to the post) went bankrupt at least once before hitting it big. As investors will attest, you can’t tell who will succeed until they do.
Get Innovating
For a while, big labs were the ones creating innovation, but that’s changed a lot in recent years. Small inventors disrupting the status quo isn’t a new phenomenon. We’d like to see more of it today.
We’re proud to see garage-scale innovation basically every day. Maybe the days when you could start Apple in your garage are gone. Certainly, you can’t actually launch a new personal computer like they did. But will garage innovators play a part in alternate energy, AI, or another nascent field? We hope so. Maybe you’ll be one of them.
Title image courtesy of [Cottonbro Studio] […]


Hackaday Podcast Episode 259: Twin-T, Three-D, and Driving to a Tee

Hackaday Editors Elliot Williams and Al Williams sat down to compare notes on their favorite Hackaday posts of the week. You can listen in on this week’s podcast. The guys talked about the latest Hackaday contest and plans for Hackaday Europe. Plus, there’s a What’s That Sound to try. Your guess can’t be worse than Al’s, so take a shot. You could win a limited-edition T-shirt.
In technical articles, Elliot spent the week reading about brushless motor design, twin-t oscillators, and a truly wondrous hack to reverse map a Nintendo Switch PCB. Al was more nostalgic, looking at the 555 and an old Radio Shack kit renewed. He also talked about a method to use SQL to retrieve information from Web APIs.
Quick hacks were a mixed bag, with everything from homemade potentiometers to waterproof 3D printing. Finally, the guys talked about Hackaday originals. Why don’t we teach teens to drive with simulators? And why would you want to run CP/M — the decades-old operating system — under Linux?

Download the file suitable for listening, burning on CDs, or pressing on vinyl.

Episode 259 Show Notes:

What’s that Sound?

Know what it is? Take your shot, and you might win a Hackaday Podcast T-shirt.

Interesting Hacks of the Week:

Quick Hacks:

Elliot’s Picks:

Al’s Picks:

Can’t-Miss Articles: […]


Filters are in Bloom

If you are a fan of set theory, you might agree there are two sets of people who write computer programs: those who know what a Bloom filter is and those who don’t. How could you efficiently test to see if someone is in one set or the other? Well, you could use a Bloom filter. [SamWho] takes us through the whole thing in general terms that you could apply in any situation.
The Bloom filter does make a trade-off for its speed: it is subject to false positives but not false negatives. That is, if a Bloom filter tells you that X is not part of a set, it is correct. But if it tells you X is in the set, you may have to investigate more to see if that’s true.
If it can’t tell you that something is definitely in a set, why bother? Usually, you use a Bloom filter when you want to reduce searching through a huge amount of data. The example in the post talks about having a 20-megabyte database of “bad” URLs. You want to warn users if they enter one, but downloading that database is prohibitive. A Bloom filter, though, could be as small as 1.8 megabytes, with a 1-in-1,000 chance of a false positive.
Increase the database size to 3.59 megabytes, and you can reduce false positives to one in a million. Presumably, if you got a positive, you could accept the risk it is false, or you could do more work to search further.
Imagine, for example, a web cache device or program. Many web pages are loaded one time and never again. If you cache all of them, you’ll waste a lot of time and push other things out of the cache. But if you test a page URL with a Bloom filter, you can improve things quite a bit. If the URL may exist in the Bloom filter, then you’ve probably seen it before, so you might want to cache it.
If it says you haven’t, you can add it to the filter so if it is ever accessed again, it will cache. Sure, sometimes a page will show a false positive. So what? You’ll just cache the page on the first time, which is what you did before, anyway. If that happens only 0.1% of the time, you still win.
In simple terms, the Bloom filter hashes each item using three different algorithms and sets bits in an array based on the result. To test an item, you compute the same hashes and see if any of the corresponding bits are zero. If so, the item can’t be in the set. Of course, there’s no assurance that all three bits being set means the set contains the item; those three bits might have been set by totally different items.
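In rough terms, that scheme fits in a few lines of Python (our own sketch, not [SamWho]’s code):

```python
import hashlib

class BloomFilter:
    """Each item is hashed k ways, and each hash sets one bit."""

    def __init__(self, num_bits=4096, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits)  # one byte per bit, for clarity

    def _positions(self, item):
        # Derive k independent bit positions by salting one hash function.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        # Any zero bit means the item was definitely never added; all ones
        # means "probably added" (a false positive is possible).
        return all(self.bits[pos] for pos in self._positions(item))
```

Sizing the bit array and the number of hashes against the expected item count is what sets the false-positive rate the post talks about.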
Why does increasing the number of bits help? The post answers that and looks at other optimizations like a different number of hash functions and counting.
The post does a great job of explaining the filter, but if you want a more concrete example in C, you might want to read this post next. Or search for code in your favorite language. We’ve talked about Python string handling with Bloom filters before. We’ve even seen a proposal to add them to the transit bus. […]