Bubble or not, what does that really change? The economy will rise and fall one way or another; it moves in cycles. If the bubble pops, the fall will be sharper. Unless you own AI or tech stocks, it's probably not a big deal.
AWS/GCP/Azure cloud turns the audit beast into a house-cat: one IAM rule, one log stream, one firewall, and so on.
Otherwise, you need to fill out a lot of documents to prove that your bare metal is safe to host, for example, cardholder data.
More like an actor engine than IFTTT, a simple and good tool that can simplify workflows like "daily export data from HubSpot to Google spreadsheet, then send emails."
The article is about intentionally killing XSLT/XML in the browser. I think it's evolutionary: devs switched to JSON, and AI agents don't care at all - they can handle anything. XML just lost naturally, like Gopher.
The problem is not XML vs. JSON. This is not about choosing the format to store a node app's configuration. This is about an entire corpus of standards, protocols that depend on this. The root problem for me is:
1) Google doing whatever they want with matters that affect every single human on the planet.
2) Google running a farce with a "public feedback" program where they don't actually listen - or, in this case, ask for feedback after the fact.
3) Google not being truthful or introspective about the reasons for such a change, especially when standardized alternatives have existed for years.
4) Honestly, so much of standard, interoperable "web tech" has been lost to Chrome's "web atrocities" and IE before that... you'd think we've learned the lesson to "never again" have a dominant browser engine in the hands of a "for profit" corp.
The narrative would be more compelling to me if Google didn’t fail to impose their technology on the web so many times.
NaCL? Mozilla won this one. Wasm is a continuation of asm.js.
Dart? It now compiles to Wasm but has mostly failed to replace js while Typescript filled the niche.
Sure, Google didn’t care much for XML. Internally they had a proper replacement for communication and simple serialisation in protobuf, which they never actually tried to push for web use. Somehow JSON ended up becoming the standard.
I personally don’t give much credit to the theory of Google as a mastermind patiently undermining the open web for years via the standards.
Now if we talk about how they have been pushing Chrome through their other dominant products and how they have manipulated their own products to favour it, I will gladly agree that there is plenty to be said.
> NaCL? Mozilla won this one. Wasm is a continuation of asm.js.
And yet the design of wasm is the way it is to a large extent because of V8 limitations and Google's pushback on having to do any substantial changes for the sake of a clean design.
Yes, this is the real issue, and it's a pity so many comments delve into JSON vs. XML rather than the title's claim that "Google is killing the open web". A new stage of the web is forming, where Big Tech AI isn't just chatbots but has matured into offering fully operational end-to-end services - all AI-operated and served, up to tailor-made, domain-specific UI. The corporations that win their markets then no longer need the open web to slurp data from. With all open-web data absorbed, fresh human creativity now flows in exclusively via these services, directly feeding the AI systems.
There are a lot of comments focusing more on the specifics of XML and XSLT because that's what much of the article laboriously drones on about, despite its general title.
Lost? The format is literally everywhere and a few more places. Hard to say something lost when it's so deeply embedded all over the place. Sure, most developers today reach for JSON by default, but I don't think that means every other format "lost".
Not sure why there is always such a focus on who is the "winner" and who is the "loser"; things can co-exist just fine.
Immaterial. If the answer is either 'yes' or 'no', it makes no actual difference: gopher still exists, is still a thing, is still successful. It feels like you're just trying to move the goal-posts and redefine what 'lose' means and trying to lure the poster into a "gotcha".
It's not about a "gotcha."
Browsers once supported the Gopher protocol but dropped it around a decade ago. This serves as an analogy: if users don't use XSLT/XML daily, browsers may eventually drop XSLT support - supporting features costs money.
That's not a great analogy. Firefox once supported RSS feeds as live bookmarks and dropped them - not because people didn't use the feature, but despite the fact that they did, and bemoaned its loss for years afterwards.
This is a "cope" argument. GP doesn't mean literally no one uses it; they mean very few people use it. Yes, there are people using RSS/XML, but that proportion is (will be?) 0% when rounded to the nearest Nth decimal. They are, unfortunately, insignificant.
+1 I think XML "lost" some time ago. I really doubt anyone would choose to use it for anything new these days.
I think, from my experience at least, that we keep getting these "component reuse" pitches coming around: "oh, you can use Company X's schema to validate your XML!", "oh, you can use Company X's custom web components in your web site!", etc. Yet it rarely, if ever, seems to happen. It very rarely feels like components/schemas/etc. can be reused outside their original intended use cases; when they can, they are either so trivially simple it's hardly worth the effort, or so verbose, cumbersome, and abstracted - trying to be all things to all people - that they are a real pain to work with. (And for the avoidance of doubt, I don't mean things like Tailwind et al. here.)
I'm not sure who keeps dreaming these things up with this "component reuse" mentality but I assume they are in "enterprise" realms where looking busy and selling consulting is more important than delivering working software that just uses JSON :)
It may be that nobody would choose XML as the base for their new standard. But there are a ton of existing standards built around XML that are widely used and important today. RSS, GPX, XMPP, XBRL, XSLT, etc. These things aren't being replaced with JSON-based open standards. If they die, we will likely be left without any usable open standards in their respective niches.
Looking at the list, what actually jumps out at me is there is probably a gap in the world of standards for a JSON-based replacement to RSS. Looking it up someone came up with the idea of https://www.jsonfeed.org/ and hopefully it gains traction.
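For a sense of what that replacement looks like, here is a minimal feed following the JSON Feed 1.1 spec from jsonfeed.org (the field names come from that spec; the URLs and content are made up):

```python
import json

# A minimal feed per the JSON Feed 1.1 spec (jsonfeed.org).
# "version", "title", "home_page_url", "items", "id", "content_text",
# and "date_published" are spec field names; the values are invented.
feed = {
    "version": "https://jsonfeed.org/version/1.1",
    "title": "My Example Blog",
    "home_page_url": "https://example.org/",
    "items": [
        {
            "id": "https://example.org/posts/1",
            "content_text": "Hello, world!",
            "date_published": "2024-01-01T00:00:00Z",
        }
    ],
}
print(json.dumps(feed, indent=2))
```

Compared to RSS, there's no envelope boilerplate and no entity-escaping of content - which is presumably the appeal.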
In hindsight, it is hard to imagine a JSON-based RSS-style standard struggling to catch on. The first project every aspiring JS developer builds would be adding a feed to their website.
Probably nobody would choose it for anything new, because the sweet spots for XML usage have all already been taken. That said, if someone said "hey, we need to redo some of these standards," they could of course find ways to make JSON work for some standards that are XML today. But for a lot of them JSON would be the absolute worst choice, and if you were redoing them you would use XML again.
basically anything that needs to combine structured and unstructured data and switch between the two at different parts of your tree are probably better represented as XML.
I do. That's why I have a browser render it to a format that makes sense for human consumption.
Granted, HTML actually makes sense in its XML-ish form (I don't remember if it's technically compliant), since it weaves formatting into semantically uninterrupted text.
If that's not the case, I don't see a real benefit to using XML over anything sane (not YAML... binary formats, depending on the use case).
>I do. That's why I have a browser render it to a format that makes sense for human consumption.
I guess if that's the standard, then reading any data format is a slog, because most data and document formats get rendered as something for "human consumption". That said, a programmer often has to read the format without the rendering - so, your witty reply aside, I guess you must find that task a slog where HTML is concerned.
This is too bad, because most mixed-content formats - EAD, HTML, etc. - are like that. If you want humans to be able to write content such as a paragraph with a link inside it, you're going to write it as mixed content, because that's what works best based on millions of programmer and editor hours over decades - and JSON would be crap for it.
Is it super great? Nope - it's only the best way we currently know of to write document formats (highly technical, mixing structured and unstructured content), in the same way that democracy is the worst form of politics except for all the others, and like multiple other things in the world that suck but are still better than all the alternatives.
I didn't say EAD was great, I said it was better than JSON for what it needed to do, part of which is having humans write mixed content.
Believe me, I have certainly seen JSON enthusiasts try to replicate mixed-content documents in JSON, and it has always ended up looking at least as bad as any XML, but without all the tooling that makes XML easier to write, and with a tendency toward brittleness, because doing mixed content in JSON requires a lot of character escaping.
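The contrast is easy to demonstrate. Below is a paragraph with an inline link in XML, and one made-up JSON encoding of the same thing (there is no standard shape for mixed content in JSON, which is part of the problem):

```python
import json
import xml.etree.ElementTree as ET

# Mixed content is natural in XML: text and inline markup interleave.
xml_doc = '<p>See the <a href="https://example.org">docs</a> for details.</p>'
p = ET.fromstring(xml_doc)
assert p.text == "See the "           # text before the inline element
assert p[0].tail == " for details."   # text after it

# A JSON encoding of the same paragraph has to invent a structure
# (this particular shape is made up for illustration):
json_doc = json.dumps({
    "p": [
        "See the ",
        {"a": {"href": "https://example.org", "text": "docs"}},
        " for details.",
    ]
})
print(json_doc)
```

Once the content contains quotes or newlines, the JSON version also accumulates escape characters that a human author has to get right by hand.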
I'm going to end here with the observation that I doubt you are actually acquainted with the workflows of editors, writers, and publishing industries, or with the use of markup formats in long-running companies that rely on them - I just have a feeling about this. Your technical expertise seems to lie outside the area you are critiquing. Some of these companies are actually quite technically advanced, so I'm just putting it out there that you might not be as aware as you think of the requirements of the parts of the world that use the things you would build in a superior manner, if only you were given the task.
XML might have “lost” but it’s still a format being used by many legacy and de novo projects. Transform libraries are also alive and well, some of them coming with hefty price tags.
LLMs produce much more consistent XML than JSON, because JSON is a horrible language that can be formatted in 30 different ways, with tons of useless whitespace everywhere, making for terrible next-token prediction.
I have a hilarious example of this. I was hired to consult at a large company that had "configurators": applications that decided all sorts of things for you if you were building a new factory with this company's stuff. For example, one configurator searched for replacement parts in your area - so if you were building a factory in Asia but wanted to use a particular part that has export restrictions because it's manufactured in the U.S., you would use this tool to pick out an appropriate replacement part made somewhere in Asia.
They had like 50 different configurators, built at different times using different tech (my memory is a bit fuzzy as to how many exactly, but it was a lot). So of course they wanted a solution to unify their codebase and also make it easy to build new configurators.
So they built a React application to take a configurator input format that would tell you how to build this application and what components to render and blah blah blah etc.
Cool. But the configurator format was in JSON so they needed to make an editor for their configurator format.
They didn't have a schema or anything like this they made up the format as they went along, and they designed the application as they went along by themselves, so application designed by programmers with all the wonder that description entails.
That application at the end was just a glorified tree editor that looked like crap and of course had all sorts of functionality behavior and design mixed in with its need to check constraints for outputting a particular JSON structure at a particular point. Also programmed in React.
There were about 10 programmers, including several consultants, who had worked on this for over a year when I came along. They were also shitting bricks because they had only managed to port over 3 configurators, every new port required adding functionality to the editor and the configurator compiler, and there was talk of redesigning the whole configurator editor because it sucked to use.
Obviously the editor part should have been done in XML. Then people could have edited the XML by learning to use XML spy, they could have described their language in XML schema real easy, and so forth.
But no they built everything in React.
The crowning hilarity - this application at most would ever be used by about 20 people in the world and probably not more than 10 people at all.
I felt obligated by professional pride (and also by the fact that I could see no way could this project keep being funded indefinitely so it was to my benefit to make things work) to explain how XML would be a great improvement over this state of affairs but they wouldn't hear of it.
After about 3 months on it was announced the project would be shut down in the next year. All that work wasted on an editor that could probably have been done by one expert in a month's time.
XML is not a file format only. It's a complete ecosystem built around that file. Protocols, verifiers, file formats built on top of XML.
You can get XML and convert it to everything. I use it to model 3D objects for example, and the model allows for some neat programming tricks while being efficient and more importantly, human readable.
Apart from being small, JSON is the worst of both worlds - a hacky K/V store, at best.
Calling XML human-readable is a stretch. It can be, with some tooling, but JSON is easier to read both with tooling and without.
The schema does affect how human-readable the serialization is, but I know significantly fewer people who can parse an XML file by sight than JSON.
Efficient is also... questionable. It requires full Turing-machine power even to validate, IIRC (it certainly does to fully parse).
by which metric is XML efficient?
By efficiency, I mean it's text and compresses well. If we mean speed, there are extremely fast XML parsers around see this page [0] for state of the art.
For hands-on experience, I used rapidxml for parsing said 3D object files. A 116K XML file is parsed instantly (the rapidxml library's aim is to have speed parity with strlen() on the same file, and they deliver).
Converting the same XML to my own memory model took less than 1ms including creation of classes and interlinking them.
This was on 2010s era hardware (a 3rd generation i7 3770K to be precise).
Verifying the same file against an XSD would add some milliseconds, not more. Considering the core of the problem might take hours on end torturing memory and CPU, a ~20ms overhead is basically free.
I believe JSON and XML's readability is directly correlated with how the file is designed and written (incl. terminology and how it's formatted), but to be frank, I have seen both good and bad examples on both.
If you can mentally parse HTML, you can mentally parse XML. I tend to learn to parse any markup and programming language mentally so I can simulate them in my mind, but I might be an outlier.
If you're designing a file format based on either for computers only, approaching Perl level regular expressions is not hard.
Because issues with XML are pretty much never about sanity checks. After all, XML is pretty much never written by hand; it's written by tools, which will most likely produce valid XML.
Most of the time you will actually be debugging what’s inside the file to understand why it caused an issue and find if that comes from the writing or receiving side.
It’s pretty much like a binary format, honestly. XML basically has all the downsides of one with none of the upsides.
I mean, I found it pretty trivial to write parsers for my XML files, which are not simple ones, TBH. The simplest one contains a bit more than 1700 lines.
It's also pretty easy to emit, "I didn't find what I'm looking for under $ELEMENT" while parsing the file, or "I expected a string but I got $SOMETHING at element $ELEMENT".
Maybe I'm biased because I've worked with XML files for more than a decade, but I never spent more than 30 seconds debugging an XML parsing process.
Also, this was one of the first parts I "sealed" in the said codebase and never touched again, because it worked, even if the incoming file was badly formed (by erroring out correctly and cleanly).
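The kind of defensive parsing with descriptive errors described above can be sketched roughly like this (the `<Model>`/`<Vertex>` element names are hypothetical, not from the commenter's actual format):

```python
import xml.etree.ElementTree as ET

# A sketch of emitting clear parse errors; element names are made up.
def parse_model(xml_text: str) -> list[tuple[float, float]]:
    root = ET.fromstring(xml_text)
    vertices = root.findall("Vertex")
    if not vertices:
        raise ValueError(
            "I didn't find what I'm looking for under <Model>: no <Vertex> elements"
        )
    points = []
    for v in vertices:
        try:
            points.append((float(v.get("x")), float(v.get("y"))))
        except (TypeError, ValueError):
            raise ValueError(f"I expected a number but got {v.attrib} at element <Vertex>")
    return points

print(parse_model('<Model><Vertex x="1.0" y="2.0"/></Model>'))  # [(1.0, 2.0)]
```

Once a function like this passes its checks against known-good and known-bad files, there's little reason to reopen it - which is presumably what "sealing" amounts to.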
> It's also pretty easy to emit, "I didn't find what I'm looking for under $ELEMENT" while parsing the file, or "I expected a string but I got $SOMETHING at element $ELEMENT".
I think we are actually in agreement. You could do exactly the same with a binary format, without having to deal with the cumbersomeness of XML - which is my point.
You are already treating XML like one: writing errors in your own parsers and "sealing" it.
Telling the parser to navigate to first element named $ELEMENT, checking a couple of conditions and assigning values in a defensive manner is not cumbersome in my opinion.
I would not call parsing binary formats cumbersome (I'm a demoscene fan, so I aspire to match their elegance and performance in my codebases), but it's not the pragmatic approach for this particular problem.
So, we arrive at your next question:
> What’s the added value of xml then?
It varies. Let me try to explain.
First of all, it's a self documenting text format. I don't need an extensive documentation for it. I have a spec, but someone opening it in a text editor can see what it is, and understand how it works. When half (or most) of the users of your code are non-CS researchers, that's a huge plus.
Talking about non-CS researchers: these folks will be the ones generating these files from different inputs. Writing XML in any programming language, incl. FORTRAN and MATLAB (not kidding), is 1000 times easier than writing a binary blob.
Expanding the file format I developed on XML is extremely easy. You change a version number, maybe add a couple of paths to your parser, and you're done. If you feel fancy, allow for backwards compatibility, or just throw an error if you don't like the version (this is mostly for non-CS folks; I'm not that cheap). I don't need to work with nasty offsets or slight behavior differences causing me to pull my hair out.
Preservation is much easier, too. Scientific software rots much quicker than conventional software, so keeping the file format readable is better for preservation.
"Sealing" in that project's parlance means "verify and don't touch it again". When you're comparing your results with a ground truth with 32 significant digits, you don't poke here and there leisurely. If it works, you add a disclaimer that the file is "verified at YYYYMMDD", and is closed for modifications, unless necessary. Same principle is also valid for performance reasons.
So, building a complex file format over XML makes sense. It makes the format accessible, cross-platform, easier to preserve and more.
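The version-gated evolution described above can be sketched like this (the `version` attribute and element names are hypothetical illustrations, not the commenter's actual format):

```python
import xml.etree.ElementTree as ET

# Versions we know how to parse; a hypothetical "version" attribute
# on the root element gates which paths the parser looks for.
SUPPORTED = {"1.0", "1.1"}

def load(xml_text: str) -> dict:
    root = ET.fromstring(xml_text)
    version = root.get("version", "1.0")
    if version not in SUPPORTED:
        raise ValueError(
            f"unsupported file version {version}; expected one of {sorted(SUPPORTED)}"
        )
    data = {"version": version}
    if version == "1.1":
        # A path added in 1.1; older files simply don't have it.
        node = root.find("Extras")
        data["extras"] = node.text if node is not None else None
    return data

print(load('<Model version="1.1"><Extras>hi</Extras></Model>'))
```

Old files keep parsing untouched, new paths are only consulted for new versions, and unknown versions fail loudly instead of silently misparsing.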
Serialize that to a JavaScript object, then tell me, is "AnElement" a list or not?
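The ambiguity is easy to reproduce with any naive XML-to-object conversion (the element names and the conversion function below are made up for illustration):

```python
import xml.etree.ElementTree as ET

# A naive XML-to-dict conversion: with one child, "AnElement" looks
# like a plain value; with two, it must suddenly become a list.
def naive_to_obj(element):
    children = list(element)
    if not children:
        return element.text
    out = {}
    for child in children:
        if child.tag in out:
            # Second occurrence: silently promote the value to a list.
            if not isinstance(out[child.tag], list):
                out[child.tag] = [out[child.tag]]
            out[child.tag].append(naive_to_obj(child))
        else:
            out[child.tag] = naive_to_obj(child)
    return out

one  = naive_to_obj(ET.fromstring("<r><AnElement>a</AnElement></r>"))
many = naive_to_obj(ET.fromstring("<r><AnElement>a</AnElement><AnElement>b</AnElement></r>"))
print(one)   # {'AnElement': 'a'}        -- a plain value
print(many)  # {'AnElement': ['a', 'b']} -- suddenly a list
```

Without a schema there is no way to know, from a document containing a single `AnElement`, whether it should have been a one-element list - which is exactly the problem.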
That's one of the reasons why XML is completely useless on the web. The web is full of XML that doesn't have a schema because writing one is a miserable experience.
Most parsers support type-aware parsing, so if somebody tucks a string into a place where you expect an integer, you get an error, nil, or "0", depending on your choice.
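That "error, nil, or default" choice can be sketched as a small helper (the `<Config>`/`<Port>` names are hypothetical; Python's stdlib parser doesn't do this itself, so the coercion lives in the helper):

```python
import xml.etree.ElementTree as ET

# Coerce an element's text to int; caller picks the failure behavior.
def get_int(element, name, default=None, strict=False):
    node = element.find(name)
    text = node.text if node is not None else None
    try:
        return int(text)
    except (TypeError, ValueError):
        if strict:
            raise ValueError(f"expected an integer in <{name}>, got {text!r}")
        return default

doc = ET.fromstring("<Config><Port>eighty</Port></Config>")
print(get_int(doc, "Port"))             # None
print(get_int(doc, "Port", default=0))  # 0
```

With `strict=True` the same helper raises instead, matching the "get an error" option.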
I had the displeasure of parsing XML documents (into Rust) recently. I don't ever want to do this again.
JSON, for all its flaws, is beautifully simple in comparison. A number is either a number or the document is invalid. Arrays are just arrays and objects are just objects.
XML on the other hand is the wild west. This particular XML beast had some difficulty sticking to one thing.
Take for instance lists. The same document had two different ways to do them:
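The original snippets aren't shown here, but the two list encodings one typically runs into look something like this (element names invented for illustration):

```python
import xml.etree.ElementTree as ET

# Style 1: a wrapper element containing repeated children.
wrapped = ET.fromstring("""
<Items>
  <Item>a</Item>
  <Item>b</Item>
</Items>""".strip())

# Style 2: repeated siblings with no wrapper, mixed in with other
# elements at the same level (sometimes with a redundant count).
flat = ET.fromstring("""
<Order count="2">
  <Item>a</Item>
  <Item>b</Item>
  <Note>something else at the same level</Note>
</Order>""".strip())

print([i.text for i in wrapped.findall("Item")])  # ['a', 'b']
print([i.text for i in flat.findall("Item")])     # ['a', 'b']
```

Each style needs its own traversal code, so a document that mixes both forces the consumer to special-case every list it reads.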
Various values were scattered between attributes and child elements with no rhyme or reason.
To prevent code reuse, some element names were namespaced, so you might have <ThingName /> and <FooName />.
To round off my already awful day, some numbers were formatted with thousands separators. Of course, these can change depending on your geographical location.
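The separator problem is nastier than it sounds, because the wrong parser often succeeds silently instead of erroring (a minimal sketch with plain string handling; the sample values are invented):

```python
# The same digits mean different numbers under different conventions.
us_style = "1,234.56"   # US: comma groups thousands
de_style = "1.234,56"   # German: dot groups thousands

def parse_us(s: str) -> float:
    return float(s.replace(",", ""))

def parse_de(s: str) -> float:
    return float(s.replace(".", "").replace(",", "."))

print(parse_us(us_style))  # 1234.56
print(parse_de(de_style))  # 1234.56
print(parse_us(de_style))  # 1.23456 -- silently wrong, no error raised
```

A format with a real number type (or a fixed lexical form for numbers, as XML Schema defines) rules this whole class of bug out.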
Now, one could say that this is just the fault of the specific XML files I was parsing. And while I would partially agree, the fact that a format makes this possible is a sign of its quality.
Since there's no clear distinction between objects and arrays you have to pick one. Or multiple.
Since objects can be represented with both attributes and children you have to pick one. Or both.
Since there are no numbers in XML, you can just write them out any way you want. Multiple ways is of course preferable.
There's a trade-off and tension between simplicity and flexibility. Recently the post titled "I prefer RST over Markdown" surfaced again [0][1], showing the same phenomenon clearly.
Simple formats are abuse-proof because of their limitations, and it makes perfect sense in some cases (I'm a Markdown fan, for example, but prefer LaTeX for serious documents). Flexible formats are more prone to abuse and misuse. XML is extremely flexible and puts the burden of designing and sanity checking the file to the producers and consumers of the file format in question. This is why it has a couple of verification standards built on top of it.
I personally find it very unproductive to yell at a particular file format because it doesn't satisfy some users' expectations out of the box. The important distinction is whether it provides the capability to address those or not. XML has all the bells and whistles, and then some, to craft sane, verifiable, and easily parseable files.
I also strongly resist the notion of making everything footgun-proof. Not only does it stifle creativity and progress, it makes no sense - we should ban all kinds of blades, then. One should read the documentation of the thing one intends to handle before starting. The machine has no brain; we should use ours instead.
I'm also guilty of it myself. Some of my v1 code holds some libraries very wrong, but at least I reread the docs and correct the parts iteration by iteration (and no, I don't use AI in any form or shape for learning and code refactoring).
So if somebody misused any format and made it miserable to parse, I'd rather put the onus on the programmer who implemented the file format on top of that language, not the language itself (XML is a markup language).
The only file format I prefer not to use is YAML [2]. The problem is its "always valid" property. This puts YAML into the "Risk of electric shock. Read the manual, and read it again, before operating" category. I'm sure I could make it work if I needed to, but YAML's time hasn't come for me yet. I'd rather use INI or TOML (INI++) for configuring things.
The platform I use doesn't give statistics on that (I don't host my blog), but I assume the number is >0, since there's a lot of good browser based and free RSS readers.
And I don't care at all about the feelings of AI agents. That a tool that's barely existed for 15 minutes doesn't need a feature is irrelevant when talking about whether or not to continue supporting features that have been around for decades.
Agreed. Having actually built and deployed an app that could render entirely from XML with XSLT in the browser: I wouldn't do it again.
Conceptually it was beautiful: We had a set of XSL transforms that could generate RSS, Atom, HTML, and a "cleaned up" XML from the same XML generated by our frontend, or you could turn off the 2-3 lines or so of code used to apply the XSL on the server side and get the raw XML, with the XSLT linked so the browser would apply it.
Every URL became an API.
I still like the idea, but hate the thought of using XSLT to do it. Because of how limited it is, we ended up having e.g. multiple representations of dates in the XML because trying to format dates nicely in XSLT for several different uses was an utter nightmare. This was pervasive - there was no realistic prospect of making the XML independent of formatting considerations.
XSLT is much nicer to use if you just create a very simple templating language that compiles to XSLT. A subset of XSLT already has the structure of a typical templating language. It can even be done with regexps.
Then simplicity becomes a feature. You can write your page in pretty much pure HTML, or even pure HTML if you use comments or custom tags for block markers. Each template is simple and straightforward to write and read.
And while a different date format seems like a one-off thing you'd prefer to handle as late as possible in the stack, if you think broader - like addressing a global audience in their respective languages and cultures - you want to support that on the server, so the data (dates, numbers, labels) lands on the client in the correct language and culture. Doing just dates and perhaps numbers in the browser is then inconsistent.
> You can write your page in pretty much pure HTML, or even pure HTML if you use comments or custom tags for block markers.
That's exactly what we didn't want. The XSL encoded the view. The "page" was a pure semantic representation of the data in XML that, wherever possible, was a direct projection from the models stored and serialised internally in our system, and the XSL generated each of the different views - be it HTML, RSS, Atom, or a condensed/simplified XML view. The latter was necessary largely because the "raw" XML data was more verbose than needed due to the deficiencies of XSL.
It's possible it'd be more pleasant to use XSL your way, but that way wouldn't have solved any issues we had a need to solve.
> you want to support that on the server so the data (dates, numbers, labels) lands on the client in the correct language and culture.
That would've meant the underlying XML would need to mix view and model considerations, which is exactly what we didn't want.
Today I'd simply use a mix of server-side transformations, CSS, and web components to achieve the same thing rather than try to force XSL to work for something it's so painful to use for.
Sorry, I misspoke. What I had was that contents of the page were served to the browser as XML. The browser automatically requested the appropriate XSLT to convert the XML to XHTML to display it nicely. Basically same thing that you had, except that I didn't need feeds.
What I wanted to say is that I didn't write the XSLT by hand. Instead I wrote XHTML files with just a few short, convenient markers, and my regexp-based "compiler" converted them into the various nasty xsl tags, generating the output XSLT that was served.
For example `$name` was converted to `<xsl:value-of select="name" />` and `@R:person` was converted to `<xsl:for-each select="person">` and `@E` was converted to `</xsl:for-each>`
Basically there were 5 tags that were roughly equivalent to
with, for, else, if, end
`with` descended into a child in XML tree if the child was present and displayed enclosed XHTML
`for` displayed copy of the enclosed XHTML for each copy of its argument present in XML, descending into them
`else` displayed enclosed XHTML only if the argument node didn't exist in XML
`if` displayed enclosed XHTML Argument only if the element existed
Neither if nor else descended into the argument node in XML. With and for descended. XSLT was using relative paths everywhere.
`end` marked the end of each respective block.
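A regexp-based "compiler" along these lines is genuinely tiny. The marker syntax below (`$name`, `@R:person`, `@E`) follows the comment; the implementation itself is an assumption, not the commenter's actual code:

```python
import re

# Rewrite the short markers into XSLT tags, per the mapping described
# above: $x -> value-of, @R:x -> for-each open, @E -> for-each close.
def compile_template(src: str) -> str:
    src = re.sub(r"\$(\w+)", r'<xsl:value-of select="\1" />', src)
    src = re.sub(r"@R:(\w+)", r'<xsl:for-each select="\1">', src)
    src = re.sub(r"@E", "</xsl:for-each>", src)
    return src

print(compile_template("<ul>@R:person<li>$name</li>@E</ul>"))
# <ul><xsl:for-each select="person"><li><xsl:value-of select="name" /></li></xsl:for-each></ul>
```

The remaining markers (`with`, `if`, `else`) would each be one more `re.sub` in the same vein, emitting `xsl:if` or `xsl:apply-templates` equivalents.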
> That would've meant the underlying XML would need to mix view and model considerations, which is exactly what we didn't want.
The thing is, what you are serving to the browser is always a View-Model. If you serve a Model instead you are just "silently casting" it onto a View-Model because at that moment they are identical. Sooner or later the need will arise to imbue this with some presentation specific information attached on the fly to data from your data layer Model.
I no longer try to use XSLT either, but I think web components are completely orthogonal tool. Your XSLT could still generate web components if you like. You could even "hydrate" generated HTML with React for interactivity.
What XML+XSLT was solving was basically skipped entirely by programmers. Instead of taking care about separating concerns, performance, and flexibility, they just threw into the browser a server-side-generated soup that is only one step away from a compiled binary, and called it a day.
> The thing is, what you are serving to the browser is always a View-Model. If you serve a Model instead you are just "silently casting" it onto a View-Model because at that moment they are identical. Sooner or later the need will arise to imbue this with some presentation specific information attached on the fly to data from your data layer Model.
There's a huge difference in the level of presentation specific logic you include, though. E.g. between including the publication date of something vs. including the publication date in 4 different ways because it needs to be presented in 4 different ways in different formats and templates. One is about having to ensure a superset of the information required is available, and that superset can often involve very limited overhead (and can even end up being "cheaper" through being able to have fewer, more heavily accessed, cached items), while the other means you need to closely couple it to specific details of the presentation.
> I no longer try to use XSLT either, but I think web components are completely orthogonal tool. Your XSLT could still generate web components if you like. You could even "hydrate" generated HTML with React for interactivity.
Web components alone, maybe. But combined with a higher-level framework that also lets you do server-side rendering, we're back to being able to serve up a single format and choose to transform it either server side or client side as needed, without the pain of XSL.
> Your XSLT could still generate web components if you like.
Yes, but I wouldn't gain anything at all from that other than the pain of dealing with XSLT.
> What XML+XSLT was solving was basically skipped entirely by programmers.
Because XML+XSLT wasn't solving it in a way that was good enough even for people like me who badly wanted to like it. We got it to work, but never without pain.
I think JSON is generally better than XML (although XML is better for some things, mostly it isn't), but JSON is not so good either; I think DER format is much better.
The only reason AI agents don't care about XML is because the developers decided, yet again, to attempt to recreate the benefits of REST on top of JSON.
That's been tried multiple times over the last two decades and it just ends up with a patchwork of conventions and rules defining how to jam a square peg into a round hole.
Many years ago I tried very hard to go all-in on XML. I loved the idea of serving XML files that contain the data and an XSLT file that defined the HTML templates that would be applied to that XML structure. I still love that idea. But the actual lived experience of developing that way was a nightmare and I gave up.
"Developers keep making this bad choice over and over" is a statement worthy of deeper examination. Why? There's usually a valid reason for it. In this instance JSON + JS framework of the month is simply much easier to work with.
Most of the issues with client-side XSLT stem from browsers not having updated their implementations since v1, nor their tooling to improve debugging. Both issues could be resolved by improving the implementations and tooling, as pointed out by several commenters on the GH issue.
That kind of demonstrates why XSLT is a bad idea as well though. JSON has its corner cases, but mostly the standard is done. If you want to manipulate it, you write code to do so.
JSON corresponds to XML rather than XSLT. As far as I'm aware, XML as a standard is already done as well.
XSLT is more related to frontend frameworks like react. Where XML and JSON are ways of representing state, XSLT and react (or similar) are ways of defining how that state is converted to HTML meant for human consumption.
"Choice" is a big word here. It would imply "we've weighed the alternatives, the pros and cons; we've tested and measured different strategies and implementations; and we came to this conclusion: [...]." You know, like science and engineering.
While oftentimes what actually happens is: "oh, this thing seems to be working. And it looks easy. Great! Moving on..."
People have been building things differently for the last 10 years, using JSON/gRPC/GraphQL (that's why replacing complex formats like XML/WSDL/SOAP with just JSON is a bad idea), so why train AI (i.e., spend money) on legacy tech?
It depends on what you need. For use cases like "export data from HubSpot, transform it (join by id, normalize), and load it into Google Spreadsheets," it works great. I've tested it for marketing automation, but it requires skill to configure properly.
There are a bunch of "free image finders," and some of them may mix commercial stock into the results - why couldn't others be fairer and take a one-time payment?