
Series Hybrid Semi-Trucks: It Works for Locomotives So Why Not?

The current Edison Motors semi-truck prototype. (Credit: Edison Motors)
Canadian start-up Edison Motors may not seem like much at first glance, consisting of fewer than two dozen people in a large tent, but their idea of bringing series hybrid technology to semi-trucks may just have wheels. The concept and Edison Motors' progress are explained in a recent video by The Drive on YouTube, which starts off with the point that diesel-electric technology is an obvious fit for large trucks like this. After all, it works for trains.
In a series hybrid there is no mechanical connection between the combustion engine and the wheels: a diesel engine drives a generator, which powers an electric traction motor (hence 'diesel-electric'). This configuration was first used in ships in the early 1900s and saw increasing use in railway locomotives over the course of the 20th century. Edison Motors' current prototype design uses a 9.0 liter Scania diesel engine solely to drive a generator at a fixed RPM. This is a smaller engine than the ~15 liter unit in a conventional configuration, and the design doesn't need a gearbox either.
Compared to a battery-electric semi-truck like the Tesla Semi, it weighs far less, and unlike a hydrogen fuel-cell semi-truck it actually exists and doesn't require new technologies to be invented. Instead, a relatively small battery is kept charged by the diesel generator, with power fed back into the battery by regenerative braking. This increases efficiency in many ways, especially in start-stop traffic, while avoiding the weight penalty of a heavy battery pack and retaining the ability to use existing service stations, or even jerry cans of diesel.
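To get a feel for why this layout pays off in start-stop traffic, consider a minimal power-flow sketch in Python. Every figure here (generator power, battery size, regeneration efficiency, demand profile) is an illustrative assumption, not an Edison Motors specification:

```python
# Toy series-hybrid power-flow model: the generator runs at one fixed,
# efficient operating point while a small battery absorbs the difference
# between generated and demanded power, including recovered braking energy.

GEN_POWER_KW = 250.0    # generator output at its fixed RPM (assumed)
BATTERY_KWH = 100.0     # relatively small buffer battery (assumed)
REGEN_EFFICIENCY = 0.6  # fraction of braking power recovered (assumed)

def simulate(demand_profile_kw, dt_h=1 / 3600):
    """Walk a per-second traction demand profile (kW, negative while
    braking) and return the battery's state of charge in kWh."""
    soc = BATTERY_KWH / 2
    for demand in demand_profile_kw:
        supply = GEN_POWER_KW
        if demand < 0:  # braking: recover part of the kinetic energy
            supply += -demand * REGEN_EFFICIENCY
            demand = 0.0
        # Clamping stands in for the generator throttling back when full.
        soc = min(max(soc + (supply - demand) * dt_h, 0.0), BATTERY_KWH)
    return soc

# One start-stop cycle: accelerate hard, cruise, brake, wait at a light.
cycle = [400.0] * 20 + [150.0] * 60 + [-300.0] * 15 + [0.0] * 30
print(f"Battery after 100 such cycles: {simulate(cycle * 100):.1f} kWh")
```

The point the numbers make is that peak demand (400 kW in this sketch) can far exceed the generator's steady output, with the battery bridging the gap and braking energy topping it back up.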
In addition to complete semi-trucks, Edison Motors also works on conversion kits for existing semi-trucks, pick-up trucks and more. Considering how much of the rolling stock on North America's rail systems is diesel-electric, it's rather amazing that the same shift to series hybrids on its roads has taken this long. Even locomotives occasionally used direct-drive diesel, but the benefits of diesel-electric quickly made that approach obsolete.



Using Antimony To Make Qubits More Stable

One of the problems with quantum bits, or "qubits", is that they tend to be rather fragile, with a high sensitivity to external influences. Much of this is due to the atoms used for qubits having only two distinct spin states, up or down, along with superpositions of the two. Any disturbance of the qubit's state can make it flip from one spin to the other, erasing the original state. Now antimony is suggested as a better qubit atom by researchers at the University of New South Wales in Australia, due to it having effectively eight spin states, as also detailed in the university press release along with a rather tortured 'cats have nine lives' analogy.
For the experiment, also published in Nature Physics, the researchers doped a silicon semiconductor with a single antimony atom, proving that such an antimony qubit device can be manufactured, with the process scalable to arrays of such qubits. In the constructed device, the spin state is controlled via a transistor built on top of the trapped atom. As a next step, a device with closely spaced antimony atoms will be produced, which should enable these to cooperate as qubits and perform calculations.
By having to pass through many more states before it fully flips, such a qubit can potentially be much more stable than contemporary qubits. That said, there's still a lot more research and development to be done before a quantum processor based on this technology can go toe-to-toe with a Commodore 64 to show off the Quantum Processor Advantage. Very likely we'll be seeing more of IBM's hybrid classical-quantum systems before that.
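Where does the extra stability come from? The eight states correspond to the eight magnetic quantum numbers of a spin-7/2 nucleus, and since a disturbance typically moves the spin by only one step at a time, a full flip takes seven consecutive errors rather than one. A quick sketch of the level structure (assuming the spin-7/2 antimony nucleus that gives the eight states mentioned above):

```python
# Enumerate the magnetic quantum numbers m = +S ... -S of a nucleus with
# spin S. For S = 7/2 that's eight levels; a conventional S = 1/2 qubit
# only has two, so a single errant step suffices to flip it.
from fractions import Fraction

def levels(spin):
    return [spin - k for k in range(int(2 * spin) + 1)]

for spin in (Fraction(1, 2), Fraction(7, 2)):
    ms = levels(spin)
    print(f"spin {spin}: {len(ms)} states, {len(ms) - 1} step(s) to fully flip")
    print("  m =", ", ".join(str(m) for m in ms))
```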


Curious Claim of Conversion of Aluminium into Transparent Aluminium Oxide

Sometimes you come across a purported scientific paper that makes you do a triple-check, just to be sure that you didn’t overlook something, as maybe the claims do make sense after all. Such is the case with a recent publication in the Langmuir journal by [Budlayan] and colleagues titled Droplet-Scale Conversion of Aluminum into Transparent Aluminum Oxide by Low-Voltage Anodization in an Electrowetting System.
Breaking down the claims made and putting them alongside the PR piece on the [Ateneo De Manila] university site, we start off with a material called ‘transparent aluminium oxide’ (TAlOx), which only brings to mind aluminium oxynitride, a material which we have covered previously. Aluminium oxynitride is a ceramic consisting of aluminium, oxygen and nitrogen that’s created in a rather elaborate process with high pressures.
In the paper, however, we are talking about a localized conversion of regular aluminium metal into 'transparent aluminium oxide' under the influence of an anodization process. The electrowetting element merely serves to overcome the surface tension of the liquid acid and does not otherwise matter. Effectively this process would create local spots bearing more aluminium oxide, which is… probably good for something?
Combine this with the rather suspicious artefacts in the summary image and there are so many red flags that rather than into the 'cool breakthrough' folder, we'll be filing this one under 'spat out by ChatGPT', not unlike a certain rat-centric paper that made the rounds about a year ago.


How To Find Where a Wire in a Cable is Broken

Determining that a cable has a broken conductor is the easy part, but where exactly is the break? In a recent video, [Richard] over at the Learn Electronics Repair channel on YouTube gave two community-suggested methods a shake to track down a break in a proprietary charging cable. The first attempt was to run a mains power detector along the cable to find the spot, but he didn’t have much luck with that.
The second method involved using the capacitance of the wires, specifically by treating two wires in the cable as the electrodes of a capacitor. Since the broken conductor is shorter than an intact one, it has less capacitance, and the ratio between the two theoretically allows the location of the break in the wire to be determined.
In this charging cable a single conductor was broken, so its capacitance was measured from both sides of the break and compared to the capacitance between two intact conductors. The capacitance isn't much, on the order of dozens to hundreds of picofarads, but it's enough to make an educated guess at the rough location. In this particular case the break was determined to be near the proprietary plug, which ruled out a repair, as the owner is a commercial e-bike rental shop.
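The arithmetic behind the estimate is simple, since the capacitance between two parallel conductors scales with their length. A minimal sketch, using made-up values rather than the ones from the video:

```python
# Capacitance between parallel wires grows linearly with their length, so
# measuring the broken conductor against an intact neighbour from each end
# of the cable gives two values whose ratio locates the break.

def break_position_m(c_near_pf, c_far_pf, cable_length_m):
    """Estimated distance of the break from the 'near' end."""
    return cable_length_m * c_near_pf / (c_near_pf + c_far_pf)

# Example: 180 pF measured from the plug end, 60 pF from the far end of a
# 2 m charging cable (illustrative numbers only).
print(f"Break at roughly {break_position_m(180, 60, 2.0):.2f} m "
      "from the plug end")
```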
To verify this capacitance method, [Richard] then repeated it on a piece of mains wire with a deliberate cut to one conductor. This suggested that it's not a super accurate technique as applied, but 'good enough'. With a deeper understanding of the underlying physics it can likely be made significantly more accurate, and it's hardly the only way to find broken conductors, as commenters on the video rightly added.

Thanks to [Jim] for the tip.


Most Energetic Cosmic Neutrino Ever Observed by KM3NeT Deep Sea Telescope

On February 13th of 2023, the ARCA detector of the cubic-kilometre neutrino telescope (KM3NeT) detected a neutrino with an estimated energy of about 220 PeV. This event, called KM3-230213A, is the most energetic neutrino ever observed. Although neutrinos are extremely abundant in the universe, they interact only weakly with matter, so capturing an event like this requires very large detectors. Details on this event were published in Nature.
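To put 220 PeV in perspective, a quick conversion to everyday units shows a macroscopic amount of energy packed into a single subatomic particle:

```python
# Convert the estimated neutrino energy from electronvolts to joules.
EV_TO_JOULE = 1.602176634e-19  # exact conversion factor per the SI definition

energy_j = 220e15 * EV_TO_JOULE  # 220 PeV expressed as 220e15 eV
print(f"220 PeV = {energy_j:.2e} J")  # ~3.5e-2 J
```

That works out to roughly 0.035 J, on the order of the kinetic energy of a small coin dropped from waist height, carried by a single neutrino.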
Much like other types of telescopes, KM3NeT uses neutrinos to infer information about remote objects and events in the Universe, ranging from our Sun to other solar systems and galaxies. Because of their weak interaction with matter, neutrinos cannot be observed directly the way photons are, but only indirectly, e.g. with photomultipliers that detect the blueish glow of Cherenkov radiation emitted when the charged particles produced in a neutrino interaction pass through a dense medium, such as the deep sea water in the case of ARCA (Astroparticle Research with Cosmics in the Abyss). This particular detector is located at a depth of 3,450 meters off the coast of Sicily, with 700 meter tall detection units (DUs) placed 100 meters apart, each consisting of many individual spheres filled with detectors and supporting equipment.
With just one of these ultra-high-energy neutrinos detected it's hard to say exactly where or what it originated from, but with each additional capture we'll get a clearer picture. For a fairly new neutrino telescope project it's a promising start, especially since the project as a whole is still under construction, with additional detectors being installed off the coasts of France and Greece.


3DBenchy Sets Sail into the Public Domain

Good news for everyone who cannot get enough of improbably shaped boats that get referred to as a bench: the current owner of the copyright (NTI Group) has announced that 3DBenchy has been released into the public domain. This comes not too long after Prusa's Printables website had begun to purge all derived models to adhere to the 'no derivatives' license. According to NTI, the removal of these derived models was not requested by NTI, but resulted from a third-party report, unbeknownst to NTI or the original creator of the model. Recognizing the model's importance to the community, NTI has now released it so that 3DBenchy can be downloaded and modified freely.
NTI worked together with the original creator [Daniel Norée] and former Creative Tools CEO [Paulo Kiefe] to transition 3DBenchy and the associated website to the public domain, with the latter two having control over the website and associated social media accounts. Hopefully this means that the purged models on Printables can be restored soon, even if some may prefer to print alternate (literal) benches.
The unfortunate part is that much of this mess began with the original 3DBenchy license being ignored. If that point had been addressed many years ago instead of being swept under the rug by all parties involved, there would have been no need for any of this kerfuffle.


Why AI Usage May Degrade Human Cognition and Blunt Critical Thinking Skills

Any statement regarding the potential benefits and/or hazards of AI tends to be automatically divisive and controversial, as everyone tries to figure out what the technology means to them, and how to make the most money off it in the process. Whether AI stands for Artificial Inference or Artificial Intelligence depends on who you ask, but so far it has mostly been used as a way to 'assist' people. Whether in the form of a chat client that answers casual questions, or as a generator of articles, images and code, its proponents claim that it'll make workers more efficient and remove tedium.
In a recent paper, researchers at Microsoft and Carnegie Mellon University (CMU) report survey findings which suggest that the effect is, however, mostly negative. The general conclusion is that by forcing people to rely on external tools for basic tasks, they become less capable of, and less prepared for, doing such things themselves should the need arise. A related example is provided by Emanuel Maiberg in his commentary on this study, where he notes how simple things like memorizing phone numbers and routes within a city are deemed irrelevant these days, but what if you end up without a working smartphone?
Does so-called generative AI (GAI) turn workers into monkeys who mindlessly regurgitate whatever falls out of the Magic Machine, or is there true potential for removing tedium and increasing productivity?

The Survey
In this survey, 319 knowledge workers were asked how they use GAI in their job and how they perceive GAI usage. They were asked how they evaluate the output from tools like ChatGPT and DALL-E, as well as how confident they were about completing the same tasks without GAI. Specifically, there were two research questions:

When and how do knowledge workers know that they are performing critical thinking when using GAI?
When and why do they perceive increased/decreased need for critical thinking due to GAI?

Obviously, the main thing to define here is the term 'critical thinking'. In the survey's context of creating products like code, marketing material, and similar output that has to be assessed for correctness and applicability (i.e. meeting the requirements), critical thinking mostly means reading the GAI-produced text, analyzing a generated image, and testing generated code for correctness prior to signing off on it.
The first research question was generally answered by the participants in a way which suggests that critical thought was inversely correlated with how trivial the task was deemed to be, and directly correlated with the potential negative repercussions of flaws. Another potential issue appeared here as well: some participants indicated that they accept GAI responses which lie outside their own domain knowledge, while often lacking the means or motivation to verify such claims.
The second question got a more diverse response, depending mostly on the kind of usage scenario. Although many participants indicated a reduced need for critical thinking, it was generally noted that GAI responses cannot be trusted and have to be verified, edited, and often adjusted with further queries to the GAI system.
Distribution of perceived effort when using a GAI tool. (Credit: Hao-Ping Lee et al., 2025)
Of note is that this is about the participants' perception, not about any objective measure of efficiency or accuracy. An important factor the study authors identify is self-confidence, with less self-confidence resulting in a person relying more on the GAI being correct. Considering that text generated by a GAI is well-known to do the LLM equivalent of begging the question, alongside a healthy dose of bull excrement disguised as forceful confidence and bluster, this is not a good combination.
It is this reduced self-confidence and corresponding increase in trust in the AI that also reduces critical thinking. Effectively, the less the workers know about the topic, and/or the less they care about verifying the GAI tool's output, the worse the outcome is likely to be. On top of this comes the finding that the use of GAI tools tends to shift the worker's activity from information gathering to information verification, and from problem-solving to AI-output integration. The knowledge worker thus effectively becomes a GAI quality assurance worker.
Essentially Automation
Baltic Aviation Academy Airbus B737 Full Flight Simulator (FFS) in Vilnius (Credit: Baltic Aviation Academy)
The thing about GAI and its potential impact on the human workforce is that these concerns are not nearly as new as some may think. In the field of commercial aviation, for example, there has been a strong push for many decades to increase the level of automation. Over this timespan we have seen airplanes change from purely manual flying to today's glass cockpits, with autopilots, integrated checklists, and the ability to land autonomously when given an ILS beacon to lock onto.
While this managed to shrink the required crew to fly an airplane by dropping positions such as the flight engineer, it changed the task load of the pilots from actively flying the airplane to monitoring the autopilot for most of the flight. The disastrous outcome of this arrangement became clear in June of 2009 when Air France Flight 447 (AF447) suffered blocked pitot tubes due to ice formation while over the Atlantic Ocean. When the autopilot subsequently disconnected, the airplane was in a stable configuration, yet within a few minutes the pilot flying had managed to put the airplane into a fatal stall.
Ultimately the AF447 accident report concluded that the crew had not been properly trained to deal with a situation like this, leading to them not identifying the root cause (i.e. blocked pitot tubes) and making inappropriate control inputs. Along with the poor training, issues such as the misleading stopping and restarting of the stall alarm and unclear indication of inconsistent airspeed readings (due to the pitot tubes) helped to turn an opportunity for clear, critical thinking into complete chaos and bewilderment.
The bitter lesson from AF447 was that as good as automation can be, as long as you have a human in the loop, you should always train that human to be ready to replace said automation when it (inevitably) fails. While not all situations are as critical as flying a commercial airliner, the same warnings about preparedness and complacency apply in any situation where automation of any type is added.
Not Intelligence
A nice way to summarize GAI is perhaps that these are complex tools which can be very useful, but which are at the same time dumber than a brick. Since they are based on probability models that essentially extrapolate from the input query, there is no reasoning or understanding involved. The intelligence is the one ingredient that still has to be provided by the human sitting in front of the computer. Whether it's analyzing a generated image to check that it does in fact show the requested things, criticizing a generated text for style and accuracy, or scrutinizing generated code for correctness and the absence of bugs, these remain purely human tasks without substitution.
We have seen over the past few years how relying on GAI tends to get people into trouble, ranging from lawyers who don't bother to validate the (fake) cases cited in a generated legal text, to programmers who end up with 41% more bugs courtesy of generated code. Of course, in the latter case we saw enough criticism of e.g. Microsoft's GitHub Copilot back when it first launched to be anything but surprised.
In this context, the findings of this recent survey aren't too surprising. GAI systems are just tools, and like any tool you have to properly understand them to use them safely. Since we know at this point that accuracy isn't their strong suit, that chat bots like ChatGPT in particular have been programmed to be pleasant and eager to please at the cost of their (already low) accuracy, and that GAI-generated images tend to be full of (hilarious) glitches, the one conclusion one should not draw here is that it's fine to blindly rely on GAI.
Before ChatGPT and its kin, we programmers would use forums and sites like StackOverflow to copy code snippets from, a habit which introduced many fledgling programmers to the old adage of 'trust, but verify'. If you cannot blindly trust a bit of complicated-looking code pilfered from StackOverflow or GitHub, why would you roll with whatever ChatGPT or GitHub Copilot churns out?
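In that spirit, the 'verify' half of the adage is cheap to automate. Below is a minimal, hypothetical illustration (the buggy function is made up, not actual assistant output): a plausible-looking binary search with a classic off-by-one, exposed by fuzzing it against a trusted reference.

```python
import random

def find_index(sorted_list, target):
    """Plausible-looking binary search, of the sort an assistant might emit."""
    lo, hi = 0, len(sorted_list) - 1
    while lo < hi:  # subtle bug: should be `lo <= hi` to test the last candidate
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        if sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# Don't trust it: fuzz against a known-good reference implementation.
for _ in range(1000):
    data = sorted(random.sample(range(1000), 8))
    target = random.choice(data)
    if find_index(data, target) != data.index(target):
        print(f"Mismatch for target {target} in {data}")
        break
```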


Safer and More Consistent Woodworking With a Power Feeder

Woodworking tools like table- and bandsaws are extremely useful and versatile, but they generally have the distinct disadvantage that they make no distinction between the wood and the digits of the person using the machine. While solutions like SawStop were developed to make table saws sense flesh and try to not cut it, [James Hamilton] makes a compelling argument in a recent video for the use of power feeders.
These devices are placed above the table and feed the material into the machine without any need to get one's digits near the blade. Aside from the safety aspect, this also means that the material is always fed in at a consistent speed, which is great when using a router table. Most of these power feeders are portable, so a single unit can be moved from the table saw to the router table, with [James] showing how he uses MagSwitch magnetic clamps to ease the process of moving between machines.
With the 1/8 HP mini power feeder that he's using, the four magnetic clamps appear to be enough even when cutting hardwood on the table saw, but it's important to make sure the power feeder doesn't twist while running, for obvious safety reasons. On [James]'s wish list is a way to make moving the power feeder around more efficient, because for cost reasons he only has a single unit.
Although these power feeders cost upwards of $1,000, the benefits are obvious, especially when running larger jobs. One might conceivably also DIY a solution, as they appear to be basically an AC motor driving a set of wheels that grip the material while feeding it. That said, do you use a power feeder, a SawStop table saw, or something else while woodworking?



Plastic On The Mind: Assessing the Risks From Micro- and Nanoplastics

Perhaps one of the clearest indications of the Anthropocene is the presence of plastic. Starting with the commercialization of Bakelite in 1907 by Leo Baekeland, plastics have taken the world by storm. Courtesy of being easy to mold into any imaginable shape, along with a wide range of properties that depend on the exact polymer used, it's hard to imagine modern-day society without plastics.
Yet as the saying goes, there never is a free lunch. In the case of plastics it would appear that the exact same properties that make them so desirable also risk them becoming a hazard to not just our environment, but also to ourselves. With plastics degrading mostly into ever smaller pieces once released into the environment, they eventually become small enough to hitch a ride from our food into our bloodstream and from there into our organs, including our brain as evidenced by a recent study.
Multiple studies have indicated that this bioaccumulation of plastics might be harmful, raising the question about how to mitigate and prevent both the ingestion of microplastics as well as producing them in the first place.

Polymer Trouble
Plastics are effectively synthetic or semi-synthetic polymers. This means that the final shape, whether it's an enclosure, a bag, rope, or something else entirely, consists of many monomers that polymerized into a specific shape. This offers many benefits over traditional materials like wood, glass and metals, none of which can be used for the same wide range of applications, including food packaging and modern electronics.
Photodegradation of a plastic bucket used as an open-air flowerpot for some years. (Credit: Pampuco, Wikimedia)
Unlike a composite organic polymer like wood, however, plastics do not noticeably biodegrade. When exposed to wear and tear, they mostly break down into polymer fragments that remain in the environment and are likely to fragment further. When these fragments are less than 5 mm in length they are called 'microplastics', and once they are less than 1 micrometer in length they fall into the nanoplastics group. Collectively these are called micro- and nanoplastics (MNPs).
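The size classes are simple length cutoffs, which a trivial helper makes explicit:

```python
# Size-based taxonomy: fragments under 5 mm count as microplastics, and
# fragments under 1 micrometer as nanoplastics; together they are MNPs.
def classify(length_m):
    if length_m < 1e-6:
        return "nanoplastic"
    if length_m < 5e-3:
        return "microplastic"
    return "plastic fragment (not yet an MNP)"

for size_m in (1e-2, 2e-3, 5e-7):
    print(f"{size_m:.0e} m -> {classify(size_m)}")
```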
The process of polymer degradation can have many causes. In the case of e.g. wood fibers, various microorganisms as well as chemicals will readily degrade these. For plastics the primary processes are oxidation and chain scission, which in the environment occur through UV radiation, oxygen, water, etc. Some plastics, e.g. those with heteroatoms in the backbone such as polyesters, are susceptible to hydrolysis, while those with a pure carbon backbone degrade mostly through the interaction of UV radiation with oxygen (photo-oxidation). The purpose of the stabilizers added to plastics, such as antioxidants and UV absorbers, is to retard the effect of these processes. These only slow down polymer degradation, naturally.
In short, although plastics that end up in the environment seem to vanish, they mostly break down into ever smaller polymer fragments that end up basically everywhere.
Body-Plastic Ratio
In a recent review article, Dr. Eric Topol covers contemporary studies on the topic of MNPs, with a particular focus on the new findings about MNPs found in the (human) brain, but also from a cardiovascular perspective. The latter references a March 2024 study by Raffaele Marfella et al., published in The New England Journal of Medicine. In this study, plaque excised from the carotid arteries of patients undergoing endarterectomy (arterial blockage removal) was examined for the presence of MNPs, after which the patients were followed to see whether the presence of MNPs affected their health.
What they found was that of the 257 patients who completed the full study, 58.4% had polyethylene (PE) in these plaques, while 12.1% also had polyvinyl chloride (PVC) in them. The PE and PVC MNPs were concentrated in macrophages, alongside active inflammation markers. During the study's follow-up period, 8 of the 107 patients without MNPs (7.5%) suffered a nonfatal myocardial infarction, a nonfatal stroke, or death. This contrasts with 30 of the 150 patients (20%) in the group with MNPs detected, suggesting that the presence of MNPs in one's cardiovascular system puts one at significantly higher risk of such adverse events.
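A quick, unadjusted back-of-the-envelope comparison of those event rates makes the gap obvious (the study itself uses proper survival analysis, so treat this only as a sanity check):

```python
# Event rates from the Marfella et al. figures quoted above.
mnp_events, mnp_patients = 30, 150      # MNPs detected in carotid plaque
clean_events, clean_patients = 8, 107   # no MNPs detected

rate_mnp = mnp_events / mnp_patients        # 0.200
rate_clean = clean_events / clean_patients  # ~0.075
print(f"with MNPs:    {rate_mnp:.1%}")
print(f"without MNPs: {rate_clean:.1%}")
print(f"unadjusted relative risk: {rate_mnp / rate_clean:.1f}x")
```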
Microplastics in the human body. (Credit: Richard C. Thompson et al., Science, 2024)
The presence of MNPs has not only been confirmed in arteries, but effectively in every other organ and tissue of the body as well. Recently the impact on the human brain has been investigated too, with a study in Nature Medicine by Alexander J. Nihart et al. investigating MNP levels in decedent human brains as well as the liver and kidneys. They found mostly PE, but also other plastic polymers, with the brain tissue having the highest PE proportion.
Interestingly, the more recently deceased had more MNP in their organs, and the brains of those with known dementia diagnosis had higher MNP levels than those without. This raises the question of whether the presence of MNPs in the brain can affect or even induce dementia and other disorders of the brain.
Using mouse models, Haipeng Huang et al. investigated the effects of MNPs on the brain, demonstrating that nanoplastics can pass through the blood-brain barrier, after which phagocytes consume these particles. These then go on to form blockages within the capillaries of the brain’s cortex, providing a mechanism through which MNPs are neurotoxic.
Prevention
Clearly the presence of MNPs in our bodies does not appear to be a good thing, and the only thing we can realistically do about it at this point is to avoid ingesting (and inhaling) them, while preventing more plastics from ending up in the environment, where they will inevitably start their gradual degradation into MNPs. To accomplish this, there are things that can be done at every level, from the personal up to national and international projects.
On a personal level, wearing a respirator in dusty environments and while working with plastics is helpful, as is avoiding e.g. bottled water. A recent study by Naixin Qian et al. of Columbia University found on average 240,000 particles of MNPs in a liter of bottled water, with 90% of these being nanoplastics. As noted in a related article, bottled water can be fairly safe, but it has to be stored correctly (i.e. not exposed to the sun). Certain water filters (e.g. Brita) filter out particles 0.5 to 1 micrometer in size and should thus remove most MNPs from tap water as well.
Another source of MNPs are plastic containers, with old and damaged plastic containers more likely to contaminate food stored in them. If a container begins to look degraded (i.e. faded colors), it’s probably a good time to stop using it for food.
That said, as some exposure to MNPs is hard to avoid, the best one can do here is to limit said exposure.
Environmental Pollution
Bluntly put, if there wasn’t environmental contamination with plastic fragments such personal precautions would not be necessary. This leads us to the three Rs:

Reduce
Reuse
Recycle

Simply put, the less plastic we use, the less plastic pollution there will be. If we reuse plastic items more often (with advanced stabilizers to reduce degradation), fewer plastic items would need to be produced, and once plastic items have no more use, they should be recycled. This is basically where all the problems begin.
Using less plastic is extremely hard for today's societies, as these synthetic polymers are basically everywhere, and some economic sectors essentially exist because of single-use plastic packaging. Just try to imagine a supermarket or food takeout (including fast food) without plastics. A potential option is to replace plastics with an alternative (glass, etc.), but the viability here remains low, beyond replacing effectively single-use plastic shopping bags with multi-use non-plastic bags.
Some sources of microplastics like from make-up and beauty products have been (partially) addressed already, but it’d be best if plastic could be easily recycled, and if microorganisms developed a taste for these polymers.
Dismal Recycling
Currently only about 10-15% of the plastic we produce is recycled, with the remainder incinerated, buried in landfills, or discarded as litter into the environment, as noted in a recent article by Mark Peplow. A big issue is that the waste stream features every imaginable type of plastic, mixed with other (organic) contaminants, which makes it extremely hard to even begin sorting the plastic types.
The solution suggested in the article is to break the waste stream back down into its original (oil-derived) components as much as possible, using high temperatures and pressures. If this new hydrothermal liquefaction approach, currently being trialed by Mura Technology, works well enough, it could replace mechanical recycling and the compromises it entails, especially the inferior quality compared to virgin plastic and the inability to deal with mixed plastics.
Hydrothermal liquefaction process of plastics. (source: Mura Technology)
If a method like this can increase the recycling rate of plastics, it could significantly reduce the amount of landfill and litter plastic, and thus with it the production of MNPs.
Microorganism Solutions
As mentioned earlier, a nice thing about natural polymers like those in wood is that there are many organisms who specialize in breaking these down. This is the reason why plant matter and even entire trees will decay and effectively vanish, with its fundamental elements being repurposed by other organisms and those that prey on these. Wouldn’t it be amazing if plastics could vanish in a similar manner rather than hang around for a few hundred years?
As it turns out, life does indeed find a way, and researchers have discovered multiple species of bacteria, fungi and microalgae which are reported to biodegrade PET (polyethylene terephthalate), which accounts for 6.2% of plastics produced. Perhaps it’s not so surprising that microorganisms would adapt to thrive on plastics, since we are absolutely swamping the oceans with it, giving the rapid evolutionary cycle of bacteria and similar a strong nudge to prefer breaking down plastics over driftwood and other detritus in the oceans.
Naturally, PET is just one of many types of plastics, and generally plastics are not an attractive target for microbes, as Zeming Cai et al. note in a 2023 review article in Microorganisms. Also noted is that there are some fungal strains that degrade HDPE and LDPE, two of the most common types of plastics. These organisms are however not quite at the level where they can cope with the massive influx of new plastic waste, even before taking into account additives to plastics that are toxic to organisms.
Ultimately it would seem that evolution will probably fix the plastic waste issue if given a few thousand years, but until then we smart human monkeys would do best not to create a problem where none needs to exist. At least if we don't want to all become part of a mass experiment on the effects of high-dose MNP exposure.


Improving Aluminium-Ion Batteries With Aluminium-Fluoride Salt

There are many rechargeable battery chemistries, each with its own advantages and disadvantages. Currently lithium-ion and similar chemistries (e.g. Li-Po) rule the roost due to their high energy density and at least acceptable number of recharge cycles, but aluminium-ion (Al-ion) may become a more viable competitor now that a recently published paper by Chinese researchers claims to have overcome some of its biggest hurdles. In the paper, published in ACS Central Science by [Ke Guo] et al., the use of a solid-state electrolyte, charge-cycle endurance beating LiFePO4 (LFP), and excellent recyclability are all claimed.
It's been known for a while that Al-ion batteries can theoretically be superior to Li-ion in terms of energy density, but the difficulty lies in the electrolyte, including its interface with the electrodes. The newly developed electrolyte (F-SSAF) uses aluminium fluoride (AlF3) to provide a reliable interface between the aluminium and carbon electrodes, with the prototype cell demonstrating 10,000 cycles with very little degradation. Here the AlF3 provides the framework for the EMIC-AlCl3 electrolyte, while FEC (fluoroethylene carbonate) is introduced to resolve electrolyte-electrode interface issues.
A recovery of more than 80% of the AlF3 during a recycling phase is also claimed, which for a prototype seems like a good start. Of course, as the authors note in their conclusion, frameworks other than AlF3 remain to be investigated, but this study brings Al-ion batteries a little bit closer to that ever-elusive step of commercialization and dislodging Li-ion.