This is patently, obviously _wrong_ for anyone who has tried learning any artistic skill in their life. Sorry to be this straightforward, but it gets on my nerves every time I read it.
If you tried learning, let's say, the chiaroscuro technique from Caravaggio, you'd be analyzing the way the painter simulated volumetric space by using light and dark tones in place of natural lighting and shadows. You wouldn't even think of splitting the whole painting into puzzle-sized pieces and checking how similar they look when placed next to one another.
Given somewhat decent painting skills, you'd be able to steadily apply this technique for the rest of your life just by looking at a very small sample of Caravaggio's corpus.
On the other hand, if you removed even just a single work from the original Stable Diffusion data set used to generate your painting, it would be absolutely impossible to recreate a similar enough picture, even starting from the same prompt and seed values.
Given how smart some of the people working on this are, I'm starting to believe they're intentionally playing dumb to make sure nobody is going to ask them to prove this during a copyright infringement case.
>This is patently, obviously _wrong_ for anyone who has tried learning any artistic skill in their life. Sorry to be this straightforward, but it gets on my nerves every time I read it.
Both my parents (though retired now) were commercial artists. I was trying to be an artist at one point in my life before moving into Engineering and Science. All my parents' friends are artists, so I grew up around artists.
Ask any artist here who is using Illustrator, Photoshop, Krita, etc.: how often do they Google image search for textures or reference images that get incorporated into their artwork? The final artwork is their own, but it may incorporate many elements from others' artwork.
>If you tried learning, let's say, the chiaroscuro technique from Caravaggio.. You wouldn't even think of splitting the whole painting into puzzle-sized pieces and checking how similar they look when placed next to one another.
Ever seen hyperrealistic pointillism?
Who are you to be the arbiter of how an artist creates their work? Have you ever gone to a modern art gallery and seen all the different methods people use to create artwork?
Art is boundless and unique to each who creates it.
If an artist uses a tool to create art, everyone agrees that it's art. It could be a paintbrush, clay, software on a computer, etc. But if an artist uses AI as a tool to create art, then suddenly it's not art.
Honestly, it's not that much different from sampling in music. A few years ago Kanye West tried something similar: instead of paying Aphex Twin for the sample usage of "Avril 14th", he asked John Legend to play what was basically a carbon copy of the track on a piano[1]. Ultimately he had to pay for the track anyway.
Same here. Did you use my artworks while training your AI model? Cool, pay me licensing rights and residuals. No? Then remove them from your fancy art obfuscator filter.
But how is it different from human artists creating new artworks based on their studies of thousands of previous works, I wonder? No artist has paid royalties just because they went to an exhibition and had a glimpse of another painter's work, no?
This is basically Art School 101, and the answer can be summed up as:
Another painter would go for contextual and topical inference, specifically trying to understand why some techniques and painting styles garner a specific response. The AI goes for topological inference - there is no other information beyond the painting data itself. It observes that some artists keep reusing the same colors and proportions and builds on that data using very complicated parametric analysis. It's still mechanical reproduction, just far more complicated than in the past.
And there's nothing intrinsically wrong with this, mind you - going back to my previous musical comparison, Daft Punk built a career with a lo-fi version of this idea, but they never tried to pretend that Harder Better Faster Stronger is anything but an extremely, extremely clever flip of Edwin Birdsong's "Cola Bottle Baby". And they paid both licensing and royalties for it.
Has anyone been able to deploy a better-behaving protocol over the same ASICs/SoCs in all these years? I mean, I'm pretty sure no hardware or software vendor is remotely happy with the current status quo.
Given the similarities between a GameCube and a G4-era iBook (down to the ATI Radeon 9200-like GPU), I'm surprised nobody has ever tried modifying Dolphin to boot up classic Mac OS.
There were also some more subtle behavior changes a few years ago in the way WebKit handled unprefixed flexbox centering properties (align-items, justify-content) together with min-width/min-height attributes, switching from behaving closer to IE11 in this regard to basically emulating Blink. AFAIK neither behavior was really against spec.
Honestly, I never understood the logic behind "the HTML spec has to be forgiving towards errors". Like, what's so special about it compared to any other programming language? Popularity? Excel macro programming is probably an even more popular and widely used language among non-developers, and I've yet to see anyone arguing for more leniency when putting formulas inside a sheet.
And by the way - I'd argue that HTML and CSS are not more "forgiving" towards the user; "silently failing" would be a more appropriate description. I'd rather have an error message saying there's an unclosed tag so the page couldn't be properly rendered than have the browser infer meaning from broken HTML and misapply CSS, generating a dadaist poetry piece instead of a blog page.
Unlike typical programs, HTML is often assembled dynamically. This means pages can have broken HTML sometimes, depending on data and context that the developer may not have tested. XML should have been generated from a DOM or something that guarantees proper serialization, but markup-ignorant text-gluing tools are the norm, and they're not capable of ensuring the markup is 100% correct 100% of the time.
These bugs were the worst, because they happened to end users. Users couldn't do anything about unclosed tags or unescaped ampersands; they couldn't even notify the developer of the page that refused to display.
Back then HTTPS was rare, and young mobile ISPs loved "optimizing" middleboxes that messed up the markup. Even if you generated flawless markup, your pages still wouldn't work for some users. Users were told that your page was bad and it was your fault, and they couldn't contact you about it. ISPs didn't care, because hardly anybody actually used the strict XHTML parsing mode (it made pages inaccessible to IE, which had 80% market share). Most "XHTML" pages worked fine thanks to being parsed as regular HTML with invalid extra slashes.
I'm mostly a designer, and I've been through the same process trying to learn the ropes of modern ES6 JavaScript development. This matches my experience fully.
On the same note, it took me days to wrap my head around:
- The small differences in managing modules (even simple stuff like inclusion paths using "~" vs "node_modules" vs "../node_modules" vs...) between build environments (Webpack vs Parcel vs ...), or even between different configurations of tools like webpack and babel.
- The module format used by different libraries. Some use only the ES6 module format, some use the CommonJS standard, some use both, plus a few oddball packages that are still released as global JS libraries.
- Specifically on the last one, I've yet to find a way to use global packages in webpack that works all the time. Sometimes you have to declare them as globals (jQuery), other times you have to just include the JS file, and in a few cases (p5.js) nothing seems to work properly. Parcel seems to require some extra plugins for this that I've never been able to configure properly.
- I've tried doing some REST requests using fetch, and no matter what, it always seems that there's something completely wrong with CORS and the Access-Control-Allow-Origin header. I've tried using both a local dev environment and a remote host, with Docker and on bare metal, with nginx and Apache using the required headers, with SSL certs and without. Obviously I'm doing something wrong, but it never feels like there's a consistent logic to what the right configuration should be to avoid such issues. For every guide that suggests you do X, there's another that says you should do the opposite of X.
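On the jQuery-style globals point: one approach that often works is webpack's ProvidePlugin, which rewrites free references to a name into an import of a module. This is a minimal sketch, assuming webpack 4+ and the `jquery` package installed; it's one configuration among several, not the universal answer:

```javascript
// webpack.config.js - sketch: expose jquery wherever `$` or `jQuery`
// is referenced, without declaring a real browser global.
const webpack = require('webpack');

module.exports = {
  plugins: [
    // Any module that uses the free variables $ or jQuery will have
    // `require('jquery')` injected automatically by webpack.
    new webpack.ProvidePlugin({
      $: 'jquery',
      jQuery: 'jquery',
    }),
  ],
};
```

For libraries that insist on a genuine `window` global (p5.js in "global mode" is a common offender), this injection is usually not enough, which matches the inconsistency described above.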
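For the CORS confusion, the one consistent rule is that the fix lives on the server, not in the fetch call: the server must answer the OPTIONS preflight and send Access-Control-Allow-Origin on every response. A minimal sketch using only Node's built-in http module (the `*` origin is a placeholder assumption; a real deployment would whitelist specific origins):

```javascript
// Minimal CORS-aware server sketch (Node 18+, no frameworks).
const http = require('http');

const server = http.createServer((req, res) => {
  // CORS headers must be present on BOTH the preflight and the real response.
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type');

  if (req.method === 'OPTIONS') {
    // Preflight request: answer with 204 and no body.
    res.writeHead(204);
    res.end();
    return;
  }

  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ ok: true }));
});

// Port 0 picks a free port; a real server would use a fixed one.
server.listen(0, () => {
  console.log('listening on port', server.address().port);
});
```

If you put nginx or Apache in front of a backend that also adds these headers, you can end up with duplicated header values, which browsers reject - one plausible source of the "opposite of X" advice in different guides.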
I've worked mainly on WordPress themes and plugins for the last 7-8 years, and I've yet to see anything resembling common best practices regarding testing, CI/CD, or even deployment coming from Automattic, especially regarding themes. That would be very nice to have.
Most of the issues regarding the instability and lack of security of the platform come straight from the "upload the files using FTP and then edit them straight on the production server" mantra advertised by Automattic. This is not a serious answer from a platform that allegedly powers roughly 30-40% of all websites.
What I'm afraid of is that most of those choices (not moving to current development and deployment standards, doing the bare minimum in terms of upgrades to core functionalities of the platform, forcefully intertwining data, content, and appearance of the website in the same SQL tables) are fully intentional, and Automattic is "pulling an Internet Explorer 6", making sure that most users are locked into their platform by staying well away from any modern tech that could somehow make it easier for developers to move to a different platform.
It's very, very complicated to move anything written on WordPress to a different platform without losing some content along the way.
When your website is actually just a huge ball of pictures loosely related to your content by their dynamically generated file names, and textual content placed inside layout-defining shortcodes, all stored together inside a single SQL table and dynamically rewritten by some inline PHP in a single functions.php file, forget about moving anything to Squarespace, Ghost, Hugo, or whatever without having to rewrite all of your posts by hand.
Oh, and also - I'd love to see some actual moderation in the plugin marketplace. It's become incredibly common to find a plugin acquired by another developer and completely replaced between versions with a different plugin bearing only a passing resemblance to the original set of features, but also integrating a slew of unrelated paid "marketing" services and subscriptions.
Just today I found out that a "maintenance page" plugin had morphed into a full visual builder that immediately started nagging me about subscribing to the inevitable "pro version".
Regarding the deployment and testing, this is a direction I hope to take the content on the Branch CI (https://www.branchci.com/) and WP Pusher (https://wppusher.com/) blogs in the coming months!
I'm fully aware I'm a rather mediocre frontend developer, so my opinion on the matter has very little value, but on the other hand I have this feeling that, in hindsight, HTML, CSS, and JS were probably some of the least suitable technological choices for what the web is actually used for today.
The triad of HTML, CSS, and JavaScript are actually fantastic for the web if used intelligently. Unfortunately the "used intelligently" part is skipped way more often than it should be.
Having HTML be the first resource at a URL is awesome. HTML can have lots of "invisible" metadata for a variety of user agents. Not every user agent is a graphical browser wanting to run arbitrary code. The microbrowsers that generate previews when shared by messaging apps have different needs than a graphical desktop browser which has different needs than a screen reader. The same HTML can serve all the user agents "for free".
Sticking styling and interactivity into separate resources lets the user agent figure out what it needs to load. A screen reader likely doesn't care about images, a phone might load custom CSS based on a media query, and a microbrowser wants the Open Graph data in the header.
HTML also makes it trivial to point readers to different sections of itself so a user agent can jump to a relevant part of the page. So you get hyperlinks between documents and even specific sections of documents.
HTML is not perfect but it's great for documents. Even full JavaScript apps should at least provide a skeleton of HTML saying they need JavaScript to run and give non-graphical user agents pointers to appropriate resources. Much of the content of the web would work just fine as documents and does not need to be constructed on the client with megabytes of JavaScript and hundreds of calls to different resources.
There was once the idea that the web could work like other objects in programming: you would download an object and then send it messages (the message-passing concept). I think this was imagined to live in some kind of Smalltalk-image-like environment. I think Alan Kay mentioned the idea in one of his talks, or I read about it elsewhere in relation to Smalltalk.
Good god, enough with this "fascist and nazi seem to have lost all meaning" nonsense. Let's call it the "No true fascist" fallacy. I could bring the corpse of Himmler here and some people would regurgitate the same "Bu-bu-but that's not actually fascist enough!". OP was basically quoting Umberto Eco's definition of Ur-Fascism. Is that historically accurate enough for you? Or should we check beforehand whether whoever he was referring to ever took part in the Salò Republic, before we commit the grave sin of not being taxonomically accurate?
The reason people focus on definitions like “oh, it’s really about toxic masculinity!” is because admitting the truth would make them look bad:
Fascism is a collectivist authoritarian system with regulated commerce rather than direct state control, often co-occurring with systemic racism.
The reason people don’t want to be honest about the definition is that it’s the platform of modern Democrats, who are gaslighting by calling everyone else a “fascist”:
Democrats are collectivist authoritarian.
Democrats are pushing for regulated commerce.
Democrats are rebuilding systemic racism, from rationing healthcare [2] and government aid [1] based on race to attempting to repeal civil rights laws in WA [4] and CA [3].
Democrats took to the street in acts of arson, violence, and murder to terrorize the public ahead of an election — the modern Brownshirts. [5]
I couldn't care less what oblique definition of fascism came out of some American think tank in the '80s, narrow enough not to anger any of their Thatcherite or Reaganite friends.
I'm Italian; my grandfather was drafted into the Balilla at 14 and into the fascist army later. And his stories of the time were all about the violence, the machismo, the open contempt for gay people, Jewish people, and every other minority. That's fascism, no matter if it doesn't match your clinical idea of what fascism should or shouldn't be.
And yes, they were as silly and ridiculous as the tiki-torch guys or the Jan 6 coup guys. Until they were fully in power. Then everybody stopped laughing, or wondering whether they were really dangerous or not.
And to be quite honest with you, worry not - I think we'll find out very, very soon how close they are to actual fascists(tm), compared to US Democrats.
I’m paraphrasing what the fascists said their goals were.
If you read about fascism, its proponents viewed it as "Marxism 2.0" - where they could leverage the socialist ideas of collectivist authoritarianism without the problems the original Marxist revolutionaries encountered with total state control of commerce.
A unified populace where “everything in the State, nothing outside the State, nothing against the State.”
Well yes, the Godwin point is now crossed very casually, just like people use superlatives for mundane things, such as "I ate the most amazing fries yesterday".
This makes for poor debates, with little nuance, fuzzy scales, and hardly any meaningful communication.
As I said in another post, I'm Italian, and my grandparents had some direct experience on the matter. Their families were destroyed by Nazis and fascists. My grandmother's family was Jewish; I have a few pictures of her relatives with numbers tattooed on their arms. I never dared to ask where or how they got them.
No idea how fucked up my gramps were, but by their direct account, yes, that "strong males" attitude we're talking about was quite a fascist trait.