Brian Holmes via nettime-l on Fri, 10 Nov 2023 01:21:34 +0100 (CET)



Re: <nettime> It's Time to Fight for Open Source Again (fwd)


I am very glad that someone well informed and competent responded to this
manifesto. Anarcho-capitalism is real, predatory and dominant. Corporations
have the power to instrumentalize innovation. The open-source ideology
depended on massive adoption of hacker skillsets, and that never happened.
Without better government, open-source tech mainly comes down to the
freedom to see your inventions expropriated and used against you.

Under the influence of nettime I became a sys-admin continually involved in
open-source mapping tech. I don't regret it at all but it has made very
clear the gap between coders and most other people. If we had more support
from states and local collectivities, not just for tech but also education,
then all this could move forward. But it also needs something that can only
be provided by civil society: the ethical part, which must come before and
reach beyond the tech. If we are to survive anarcho-capitalism, that is.

Best to all, Brian

On Wed, Nov 8, 2023, 08:05 lynx--- via nettime-l <
nettime-l@lists.nettime.org> wrote:

> This manifesto in favor of open source AI contains some difficult
> statements and non-trivial axiomatic assumptions that I'd like to
> point out for discussion, taking the inability of civilization to
> govern itself into account.
>
> Daniel Jeffries <danieljeffries@substack.com> writes:
> > Open source AI is under siege.
> >
> > A loosely knit group of AI opponents want to crush it completely.
> > They want to make sure you never get to download a powerful model.  If they
> > win, you'll be forced to access models trapped behind a steel cage of
> > corporate firewalls, the models safely castrated so you can't do anything
> > they don't want you to do or think or say.
> >
> > Instead of the open web, where anyone can publish anything without
> > intermediaries, you'll have a filtered web where big models parrot official
> > talking points only and keep you from "misinformation," a vaguely defined
> > term that always means "whatever the folks in power don't like at the
> > moment."
>
> With the rising dominance of misinformation on the Internet, thanks to
> suitable infrastructures and unsuitable regulation, here's an
> increasingly popular assumption that there is no such thing as a
> scientific fact. Well, I'd like to challenge that and say that, despite
> some hindrances, the concept of 'scientific consensus' still exists and
> there are still, in the majority of situations, ways to differentiate
> facts from misinformation.
>
> And I'd like to emphasize how misinformation is indeed becoming a
> massive threat to the original intentions of the Internet, now that its
> main role is to undermine the population's ability to form fact-based
> opinions, fostering instead a radicalization of minds based on false
> assumptions and data. The Internet is undermining democracy, and the
> open source model may have a role in that.
>
> Instead we are seeing a presumed conspiracy of "the folks in power"
> here. A conspiracy theory is only plausible when the number of people
> involved is reasonably small. All the folks in power cannot be in a
> conspiracy, but they may have some interests in common. They may have a
> common interest in fighting misinformation, and we should be glad for as
> long as our leaderships are still, in their majority, in favor of
> fact-based information, because the Internet is happily bringing people
> into power who do not share that value.
>
> > You see, you're too stupid to be trusted with powerful models.  You might
> > hurt yourself or others.  After all AI is just like a nuclear weapon they
> > tell us.  It's too dangerous to let into the hands of mere peasants like
> > you.
>
> My theory is that artificial intelligence is harmless if fed only with
> ethically legitimate data. It's the collection of private and/or
> democracy-destabilizing data which is threatening what's left of
> democratic civilization, with or without AI applied to it. That's where
> regulation should be enacted, because the individuals who "grant" usage
> of their data are factually neither competent nor legitimized to
> relinquish personal data that in fact affects not only themselves but
> their entire social neighborhood and the social structures of democracy
> as a whole. GDPR has been misconceived (with the best intentions, as we
> all know) and is acting as a legalization of the threats to democratic
> society.
>
> > To understand why, you just need to understand a little about where open
> > source came from, where it is today and why it's the most important software
> > in the world by a massive margin.
>
> Just because the market power of open source was strong enough to avoid
> paying for proprietary unices (but still not enough to crush and
> eliminate Windows and MacOS) doesn't imply that open source is per se
> ethically better. Windows and MacOS engulfed some of that open-source
> code into their proprietary systems, and Google even developed a
> worldwide surveillance operating system on the shoulders of thousands of
> developers who believed the naive idea that open source is automatically
> for the good, even when abandoned into the hands of big proprietary
> players who can take 99% of it, put a 1%-sized proprietary shell around
> it, and effectively cut humanity out of the financial and ethical
> surplus created by writing open source technology in the first place. If
> Android is a threat to humankind today, was the open source ideology a
> good idea? How much longer until policymakers understand that
> proprietary operating systems are a threat to human civilization and its
> ability to address more important issues like planetary survival?
>
> > Open source AI will make the world safer and it will be one of the
> > dominant forces for good in the world.
>
> That's the same style of simplification that led everyone in 1994 to
> believe that the Internet was inevitably going to improve democracy on a
> worldwide scale. Open source hasn't made the world safer, so it is naive
> and unlikely that open source AI will automatically do so. It may simply
> be the case that it won't generate much further damage, if my theory
> about data holds true. IMHO the lack of regulation of proprietary
> systems is a bigger issue than AI - it's an elephant that has been
> standing in the room for decades and hardly anyone sees it.
>
> > Frankly, it's bizarre that it's under attack.  I thought the battle for open
> > source was over forever.  How could it not be?  Open source won.  It won
> > massively.
>
> It's the same with capitalism. It won over socialist models because it
> was economically more powerful to ruthlessly deplete Earth's resources
> and monetize the inability of customers to make rational choices about
> how they spend their money. And capitalism works anarchically, it does
> not need any central coordination - that's how it won over socialism.
> But how stupid were we in 1989 to believe that it won because it was
> better? It is more efficient, but what's better on an ethical scale is a
> whole different question.
>
> It's like saying Caesar, Napoleon and Bismarck were good folks because
> they won some wars, ignoring that they shouldn't have started those wars
> in the first place.
>
> > Open source is the basis for 90% of the world's software today!  Read that
> > again slowly.  90% of the world's software.
>
> Because in our capitalist architecture of society you can't afford to
> do things differently. Is that an achievement? Is that good? That's
> an axiomatic assumption here.
>
> > Linux runs every major cloud, almost every supercomputer on the planet, all
>
> I see all the promoters of BSD deeply offended by you not mentioning
> the large market share it has due to MacOS, haha.
>
> > Open source levels the playing field in life.  It gives everyone the same
> > building blocks.  You get to use the same software as mega-corporations with
> > 10s of billions of dollars in revenue.  You get to use the same software as
> > super powerful governments around the world.  So do charities, small
> > businesses just getting started, universities, grade schools, hobbyists,
> > tinkerers and more.
>
> That's not always true. Half of my devices don't allow me to put my own
> free operating system onto them. I have been running Gentoo-based
> laptops for over a decade; I installed Linux for the first time in 1992.
> And yet today I'm forced to use Android devices that thrive financially
> on observing my digital habits, just because I'm socially forced to be
> active on Instagram, which only works properly as an app. I could run an
> Android emulator on Linux, but for some obscure, possibly political
> reasons that is still not a commonplace solution - and it would not
> enable me to post stories on location from a mobile phone the way my
> social neighborhood does, allowing Meta and Alphabet to gather
> everybody's whereabouts as they do. And solving the issue just for
> myself doesn't address the societal problem.
>
> > Go ahead and grab Linux for whatever project you dream up. Grab one of the
> > 10s of millions of other open source projects for everything from running a
> > website, to training an AI model, to running a blog, or to power a Ham radio.
> > You don't have to ask anyone's permission or pass a loyalty test.
>
> That's how the Chinese government can implement a system for keeping
> score of citizens' behavior, and although it is most likely (facts
> welcome) built on top of plenty of open source technology, the citizens
> are under totalitarian control and cannot escape from it. No reverse
> engineering. No resetting their score by running their own instance. No
> forking of the source code.
>
> > With that kind of reach and usefulness I never saw it as even remotely
> > possible that someone would see open source as a bad thing or something that
> > must be stopped ever again.
>
> Oh wow, the road to hell being paved with good intentions and
> convictions here. Not even remotely able to see the dark side of things.
> So adamant in praising the simple solutions.
>
> > Even Microsoft loves Linux now.
>
> And Apple and Google and everyone who makes lots of money using code
> that others spent years on.
>
> > But I was wrong.  Here we are again.  The battle is not over.  It's starting
> > anew.
>
> Open source never won, because it can be integrated and proprietarized.
> Back in the late '80s and early '90s there were plenty of free software
> and source code releases that came with licenses forbidding commercial
> or military abuse. The open source ideology has established the term
> "non-free" for such ethical software. Shame on open source! What a
> miserable brainwash!
>
> > They didn't listen to their own experts because too often governments
> > fall prey to the fear mongering and doomsday scenarios and a hatred of Big
> > Tech companies.
>
> It's true that policymakers have utterly failed to regulate everything
> digital from the beginning. The freedom of technology is not the freedom
> of the people affected by it or using it. By leaving technology
> unregulated, it has been allowed to evolve anarcho-capitalistically
> rather than in the interest of humankind.
>
> > This is a disaster for society, for innovation and for transparency.
> > Openness is the foundation of modern democratic societies.  Openness has led
> > to some of the most powerful breakthroughs in history.
>
> Like the careless exhausting of CO2 into the atmosphere and the
> polluting of the seas with plastic. It's the same disregard for the
> long-term effects of allowing everyone to operate in anarcho-capitalist
> openness. There are never simple solutions to complex problems. Calling
> "openness" a panacea in every context is ideological, especially if it
> means that everybody should be given the tools to harm the planet or
> society. Maybe AI actually won't, but "open source" by its current legal
> definition certainly does.
>
> > When it comes to software, when something is known to a wide group of people
> > it means anyone can try their hand at fixing problems with those systems.
>
> Not in this case, because Google only lets you tinker with your Android
> emulator (or expensive special hardware that only has a reduced number
> of proprietary blobs in it), not with the billions of smartphones out
> there.
>
> > You never know where innovation is going to come from but you do know that
> > an open and level playing field is the best way to maximize the possibility
> > that innovation will happen.
>
> That I would agree with, but we are decades away from the last time we
> had a level playing field and at least a century away from the times
> when the biggest innovations weren't endangering the sustainability of
> living species on the planet.
>
> > They’re well intentioned extremists.
>
> Yes we are, and that includes you. Now we could start reasoning
> collectively, but even then we are subject to a system of governance
> which doesn't foster collective rationality, so we are heading for doom
> anyhow, with or without AI.
>
> > Less is more when it comes to crafting tight, clear legislation that works.
>
> Another simplification that doesn't scale up to the complexity of
> reality. "Less is more" is a boomer phrase from the '80s that never
> proved true, IMHO. Simplifying complexity with brute force usually leads
> to even more problems.
>
> "Simple" legislation like the radical approach I am suggesting could
> work if there is the comprehension of urgency of taking such a radical
> approach rather than trying to make every imaginable lobbyist happy. It
> would take the ability to explain to all the stakeholders that they are
> actually better off if the market is a level and fair playfield like
> back in 1995, unless of course you are one of the few monopolists.
>
> > What is the actual probability that AI will kill us all?  Trick question!
> > Any answer someone gives is just totally made up bullshit based on
> > absolutely nothing more than someone's feelings.
>
> Indeed it is ridiculous to attribute such power to AI, given that we
> have already enacted, on a worldwide scale, all the anarcho-capitalism
> and lack of regulation that is leading us to a 95% probable doom. And
> we're taking all of fauna and flora with us - the result will be an
> uninhabitable planet for nearly all species.
>
> The inability of public attention to focus on and prioritize the issues
> that are indeed leading to the demise of human society is itself a big
> problem. The chance that AI is just another technological hop speeding
> us up on our insane trajectory may be of secondary relevance. At least I
> consider it less harmful than blockchain currencies.
>
> > If you believe in an AI apocalypse that's fine but you don't get to make the
> > rules for the rest of us.
>
> That's ideological on both sides of the fence. If they are convinced of
> the AI apocalypse, of course they need to fight against you being
> allowed to use it; and if you are convinced that open source AI is the
> answer, it could be considered disrespectful if you get to make the
> rules against them just because of that. In theory we should have
> fact-based policy-making, checking the arguments of all extremists on
> all sides. Alas, in practice...
>
> > That's life.  It's a risk at every step and the beauty is in the risk.
> > Nothing worth having comes without risk.
>
> That's how we undertook the path of industrialization that is now
> threatening our existence on the planet. Some doomsayers back in the
> 1910s were worried about the long-term consequences of ejecting CO2
> into the atmosphere. What if their opinion had seriously influenced
> policymaking at the time? Industrialization could have taken place
> on a path of sustainability, allowing a wealthier life for every
> being on the planet.
>
> We could have skipped globalization entirely, because it never made
> sense to ship onions from China to Europe while the residues of burnt
> heavy fuel oil are ejected into the atmosphere.
>
> Throwing ourselves into risks just because the short-term advantages
> are tempting, now that's short-sighted! Especially in a society that
> is notoriously unlikely to agree on correcting the trajectory once the
> gun has been fired.
>
> > The arguments for the AI apocalypse fall squarely into the Münchhausen
> > trilemma.
>
> Whenever people are convinced of things based on unproven or
> fallacious axioms... being against AI is just as questionable as
> being in unquestioning favor of "open source".
>
> > To fix problems we have to put AI into the real world.  That's how every
> > technology works, throughout all time.
>
> And then it becomes impossible to take it back on a worldwide scale,
> because if you regulate it in the EU or the US the remaining countries
> will want to cash in on the capitalist advantage of not being affected
> by such regulation. On a worldwide scale we are living in an
> anarcho-capitalist system. Anarchic structures like the UN can sometimes
> regulate themselves, but only when all national leaderships agree. If it
> isn't happening for the noble cause of saving our existence on the
> planet, why should it work for much else? Ah right, we did save the
> ozone layer. At least one thing where short-term financial and
> political profit weren't alluring enough.
>
> It's amazing how wise we can be individually, yet utterly stupid on a
> collective level.
>
> > In this story in Fortune [
> > https://substack.com/redirect/4f85e73c-54dc-4a62-bb20-9127dfa7cf52?j=eyJ1IjoiN2N2MHMifQ.YtcUop6k1Xss2moIExwcy0mVj0mtto86Q9WaZCPRkdc
> > ], OpenAI said exactly that.  “[Their] biggest fear was that people would
> > use GPT-3 to generate political disinformation. But that fear proved
> > unfounded; instead, [their CTO, Mira Murati] says, the most prevalent
> > malicious use was people churning out advertising spam."
>
> When a new technology comes out, first it is the cool people who start
> using it for disruptive yet ethically valid purposes. Then come the ones
> who use it for porn. Next come those who use it for commercial gain,
> which frequently boils down to spam. Only then come the ultimately smart
> and Machiavellian ones, who figure out how to use it for political
> manipulation. Alright, that's a theory of mine. But wouldn't you say it
> applies to at least all social media innovations? It even makes sense
> for e-mail and Usenet.
>
> > You're probably reading this on something that's built on open source,
> > whether that's your phone or your laptop.  It doesn't matter if your phone
> > is an Android phone or an iPhone, there is some open source in there.  The
> > kernel of both MacOS and iOS are open source and Android has a Linux kernel
> > in it. Yes they are wrapped in proprietary code but that's the beauty of
> > open source.  If you're a big, rich tech company, you can build flashy
> > proprietary things on top of it.
>
> The "beauty of open source" ? OMG, that's such a boomer thing to say.
> Will it take a generational change for people to realize how wrong open
> source has been by enabling proprietary technologies and keeping them in
> a position of dominance? I think we made a mistake by fighting against
> software patents, because I am pretty sure we would be living in a
> better world if proprietary technologies were entirely illegal and all
> the earnings for creative people came from patents on brilliant but free
> algorithms that have been societally proven worthy of being patented
> in a suitable way, not in an inhibitory way, but distributing profit to
> all the ones that actually contributed to it. Our smartphones would be
> truly free and everyone who contributed to the software stack running in
> them would be earning a fair share for their pension. Why does it take
> me to say such a thing and envision a better world of truly free
> software? The Internet should be full of people seeing things this way.
>
> > It's time to fight for open source again.
>
> It would be a better world if proprietary technology and the open source
> ideology that empowers it were outlawed. There would still be
> possibilities to allow proprietary software to run in sandboxes such as
> Android already provides, but it is a major harm to human society that
> the foundational operating systems aren't actually free - we are just
> brainwashed to think it is cool that they are based on software that
> once was open.
>
> Let's fight for free software that no longer subscribes to the open
> source ideology. Let's regulate proprietary software. No piece of
> hardware or software should be allowed to be proprietary if it is
> involved in handling any data of essential importance to our society or
> of constitutional importance to our democracies. If it is handling a
> private photograph or a personal message, nothing involved in doing so
> may be proprietary. And in no way is it enough that it once was "open
> source".
>
> Software patents offer enough ways to monetize software; we don't need
> to live our lives off of food whose ingredients we are not allowed to
> inspect.
>
> > It's time to fight for open societies.  It's time to fight for open software
> > and the free exchange of ideas.
>
> It's time to fight for open software that cannot be used against us.
>
> > Trust in open.  Never trust in closed. Nothing good ever comes from
> > extremist policies throughout all history. When we let extremists come to
> > power we get disaster.
>
> Oh really? When safety belts in cars were introduced, most car drivers
> yelled at the policymakers, calling them extremists. Take murder as an
> example: is it open or closed to disallow murder by legislation? I
> would say it is "closed", and yet it is foundational for civil society.
> It allows me to walk in parks at night safely, at least in my country
> with reasonable gun control.
>
> > AI fear is just a con job, designed to fool you.
>
> Another improbable conspiracy assumed here?
-- 
# distributed via <nettime>: no commercial use without permission
# <nettime> is a moderated mailing list for net criticism,
# collaborative text filtering and cultural politics of the nets
# more info: https://www.nettime.org
# contact: nettime-l-owner@lists.nettime.org