buescher 2 hours ago

These designs fascinate people who haven't designed antennas. I don't doubt that throwing enough computational power at optimizing antennas will produce antennas optimized for something at the expense of something else but if you're a casual what you should notice is that these papers never mention the "something elses". You can get a paper out of just about any antenna design, btw. There's also a type of ham that will tune up a bedframe or whatever. So just getting something to radiate should not be confused with advancing the state of the art.

These antennas found their way into the utterly savage "pathological antennas" chapter of Hansen and Collin's _Small Antenna Handbook_. See "random segment antennas". Hansen and Collin is the book to have on your shelf if you're doing any small antennas commercially and that chapter is the chapter to go to when you're asked "why don't you just".

  • xraystyle an hour ago

    This comment really sums it up well. Literally everything with antenna design is a trade-off. You can design an antenna to radiate very well at a given wavelength. The better it is at doing this, the worse it tends to be at every other wavelength. You can make an antenna that radiates to some degree across a wide array of wavelengths, but it's not actually going to work very well across any of them.

    Same thing with radiation patterns. You can make a directional antenna that has a huge amount of gain in one direction. The trade-off is that it's deaf and dumb in every other direction. (See a Yagi-Uda design, for instance.)

    Physics is immutable, and when it comes to antenna design there really is no such thing as a free lunch. Other than coming up with some wacky shapes, I don't really think AI is going to be able to create any type of "magic" antenna that's somehow a perfect isotropic radiator with a low SWR across some huge range of wavelengths.

    • buescher 16 minutes ago

      There's always a market for a better free lunch.

  • os2warpman 31 minutes ago

    Not only are they pathological, but when you order an example built because CST confirmed that the design would kick ass, and then you put it in the chamber and actually measure it for real, you walk away wondering why you wasted so much time and money.

    • buescher 16 minutes ago

      I wish I'd had a copy of the book referenced much earlier in my career too.

      As far as the twisted-paperclip antennas go, just imagine trying to verify each of those 3D bends was to spec. Or conversely, running a monte carlo on all the degrees of freedom in that design.
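A Monte Carlo tolerance run of the kind described is simple to set up once a solver is in the loop. In this sketch, `simulate_gain` is a purely hypothetical stand-in for a real EM solver call (NEC, CST, and so on), and the bend angles and ±2° tolerance are made-up numbers for illustration:

```python
import random
import statistics

def simulate_gain(bend_angles):
    # Hypothetical stand-in for a real EM solver run; an actual study
    # would call NEC/CST here. A smooth dummy so the sketch executes.
    return sum(a * a for a in bend_angles) ** 0.5

def monte_carlo_tolerance(nominal, tol_deg=2.0, trials=1000, seed=0):
    """Perturb every 3D bend angle by a uniform manufacturing error
    within +/-tol_deg and report the spread in predicted performance."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        perturbed = [a + rng.uniform(-tol_deg, tol_deg) for a in nominal]
        results.append(simulate_gain(perturbed))
    return statistics.mean(results), statistics.stdev(results)
```

With many independent bends, even small per-bend tolerances can add up to a wide performance spread, which is exactly the verification headache being pointed at.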

  • jamesholden an hour ago

    "Do not confuse inexperience with creativity"...

    • osm3000 an hour ago

      I just read it in the book section referenced in the parent comment. It burst the imaginary bubble my mind has been sitting in, a bit. I want to reflect more on it.

      Somehow, in the midst of all these LLMs and diffusion models, the only thing that seems to catch attention is creativity. I hadn't thought about experience.

      • buescher 2 minutes ago

        Experience makes creativity harder, but that's what mature creativity is. Did anyone tell you it wouldn't be work?

        The people who are most awed by LLMs are those people most used to having to be merely plausible, not correct.

lormayna 2 hours ago

As somebody who almost fried his computer during an antenna design course trying to optimize a dipole array with a (not optimized) genetic algorithm, I really like this content.

deerstalker 3 hours ago

Very cool. Evolutionary Algorithms have kinda been out of the mainstream for a long time. They are good when you can do a lot of "black-box" function evaluations but kinda suck when your computational budget is limited. I wonder if coupling them with ML techniques could bring them back.

  • bob1029 2 hours ago

    > I wonder if coupling them with ML techniques could bring them back.

    EAs are effectively ML techniques. It's all a game of search.

    The biggest problem I have seen with these algorithms is that they are designed with no regard for the underlying hardware they will inevitably run on. Koza et al. were effectively playing around in abstraction Narnia when you consider how impractical their designs were (are) to execute on hardware.

    An L1-resident hill climber running on a single Zen4+ thread would absolutely smoke every single technique from the 90s combined, simply because it can explore so much more of the search space per unit time. A small tweak to this actually shows up on human timescales and so you can make meaningful iterations. Being made to wait days/weeks each time you want to see how your idea plays out will quickly curtail the space of ideas.
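A minimal sketch of such a hill climber, using a toy onemax fitness (count of set bits) as a stand-in since no concrete problem is named; the real thing would be tight compiled code, but the loop structure is the same:

```python
import random

def hill_climb(n_bits=64, iters=100_000, seed=0):
    # Single-bit-flip hill climber on onemax (count of set bits).
    # The state is tiny, so a compiled version stays L1-resident
    # and can run enormous numbers of iterations per second.
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_bits)]
    best = sum(state)
    for _ in range(iters):
        i = rng.randrange(n_bits)
        state[i] ^= 1              # propose: flip one bit
        f = sum(state)
        if f >= best:
            best = f               # keep improvements (and ties)
        else:
            state[i] ^= 1          # revert worsening moves
    return best
```

Because the entire inner loop touches only a few bytes of state, the search spends its time exploring rather than waiting on memory, which is the point being made about iteration speed.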

    • munksbeer 2 hours ago

      > A small tweak to this actually shows up on human timescales and so you can make meaningful iterations.

      Please could you explain what you meant by this part? I'm trying and failing to understand it.

  • antegamisou 15 minutes ago

    Much more robust than almost all modern ML algorithms, which, let's be real, aren't exactly applicable to anything outside recommendation systems and 2D image processing.

  • nickpsecurity 35 minutes ago

    They're not in the press a lot. They're probably still in production behind the scenes. I was reading about using them for scheduling not long ago. Btw, a toy one I wrote to show how they work got its best results using tournament selection with a significant mutation rate (closer to 20%).

    There are a lot of problems where you're searching among many possibilities in a space that has lots of pieces in each solution. If you can encode the solution and fitness, a GA can give you an answer if you play with the knobs enough. You also might not need to be an expert in that domain, like writing heuristics. If you know some, they might still help.
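As a concrete illustration of that recipe (encode a solution and a fitness, then turn the knobs), here is a toy generational GA with k-way tournament selection and a heavy-ish ~20% mutation rate, run on onemax (count of set bits) as a stand-in fitness:

```python
import random

def toy_ga(n_bits=40, pop_size=30, gens=60, k=3, mut=0.2, seed=1):
    # Generational GA: k-way tournament selection, one-point
    # crossover, then a single-bit mutation with probability `mut`.
    rng = random.Random(seed)
    fitness = sum  # onemax: maximize the number of ones
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            # Tournament selection: best of k random individuals.
            a = max(rng.sample(pop, k), key=fitness)
            b = max(rng.sample(pop, k), key=fitness)
            cut = rng.randrange(1, n_bits)      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut:              # "significant" mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(fitness(ind) for ind in pop)
```

Repointing it at a new problem usually amounts to swapping the bitstring encoding and `fitness` for a domain-specific pair, with no domain heuristics required.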

  • henning an hour ago

    The main use of evolutionary algorithms in machine learning currently is architecture search for neural networks. There's also work on pipeline design, finding the right way to string things together.

    Neural networks already take a long time to train, so throwing out gradient descent entirely for tuning weights doesn't scale well.

    Genetic programming can solve classic control problems with a few instructions, when it can solve them at all, so that's cool.

supportengineer 2 hours ago

A long time dream of rocket scientists is single-stage-to-orbit. Ideally you'd have a vehicle that takes off and lands like a conventional jet plane at a regular airport. I've always thought that perhaps AI and evolutionary algorithms might be able to navigate a way through the various tradeoffs and design constraints that have stopped us so far.

  • Zigurd an hour ago

    As an avid observer of rocket design, I suppose that hasn't happened because SSTO may not have any good solutions. I further suppose that the design parameters are so constrained there is very little opportunity for a generative, evolutionary, or any other AI-driven design approach to do more than optimize some components.

  • dumdedum123 2 hours ago

    As a rocket scientist I assure you it's been tried

qoez 3 hours ago

This has to have been done again in more modern times, simulating the EM field to find a better design instead of doing it physically.

babuloseo an hour ago

Do people not go on Wikipedia nowadays? This is literally on the frontpage of the wiki for this stuff: https://en.wikipedia.org/wiki/Genetic_algorithm

  • JohnKemeny an hour ago

    A good rule of thumb: never mock someone’s enthusiasm or excitement about learning something, even if it’s old news to you. Let people enjoy discovering things.

    • JWLong 44 minutes ago

      I'm rediscovering it. I remember reading about this in some flashy, superlative Popular Science article from the early/mid 2000s. So I was quite excited to click on the link and see that shape again.

      But also, something something lucky ten thousand.

      https://xkcd.com/1053/