brian carroll on Mon, 3 Sep 2012 10:14:58 +0200 (CEST)



<nettime> subjective math



(Hello Nettime. This essay is a footnote within a larger essay proposing
a future computing infrastructure based on 3-value logic versus binary,
as it is today. The result would be moving from an NAS-based model of
networking and cloud computing toward an AI-based 'data furnace' model.

The issue centers on the question of language as a barrier & limit to
such development, due to the lack of logical accounting of truth within
communication, where a corresponding condition of babel is ideal for
sustaining the computers and search engines of today as its rationale,
perhaps equivalent to a global network of player pianos that requires
music-roll scores to continue being made to maintain its operations,
even if people no longer play the instrument or are in control of it.
This is the blackbox condition of technology today, reliant upon this
language.

In essence it is the nature of the Sokal Hoax yet extended into the
Enron techniques of unaccountability. The corruption of language and
mathematics becomes a technology, perhaps equivalent to hacking and
cracking the economy, political system, and societal dynamics via such
techniques which functionally govern it, yet displace people in the
process who are no longer constituents within the communications.)


--- Subjective Math ---

Perhaps better said, ungrounded mathematics could lead to calculations
that are forced into partial or particular frameworks due to distortions
of logic. For instance, if relying upon numeric evaluation of a
situation as it is modeled and mediated in language, by default this
processing relies upon calculations correlated to the signs represented,
categorized, and accounted for, presuming their accuracy by default of
their existing as language – as if truth itself, correlated to number.
This is a fallacy. Language does not have this capacity today; it is a
false accounting based on a false perspective, an issue of presumption.
At most it is an approximation or generalization, supporting and
requiring structural error for calculation. In this sense mathematics is
perhaps not ‘pure’, and inherently mediates the issues of language if
interacting with it.

Why this is important: if someone creates an equation if not an
algorithm, say ‘XYZ’, and claims it can achieve ‘A’ as a result of
various mathematical (logical) operators in its processing of some
situation, the accuracy of this is contingent upon the truth involved
in the different variables A, X, Y, Z.

It could be presumed by default that A and XYZ are themselves true
because they are ‘signs’, and that if the pattern is matched between
what is said to be XYZ, for instance a category, it equates with the
particular variable representing it. In some instances, due to limiting
considerations, representations of these categories could ‘match’ the
variables in the equation yet perhaps in only partial ways. So it may
not be purely or entirely true, only partially so: X versus partial-X,
or ~X. It may also involve an issue of inherent approximation, whether
rounding numbers or ideas into a context. For example the number 10.2
could turn into X = 10.0 instead, or referencing The Crossroads could
involve X representing nearer proximity to it.
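
As a minimal sketch of this input approximation (in Python, with
hypothetical values and a hypothetical evaluate() standing in for any
downstream calculation):

    # the 'true' condition versus the rounded sign that enters the equation
    x_actual = 10.2            # the situation as measured
    x_sign = round(x_actual)   # 10: the approximation used as the variable X

    def evaluate(x):
        return 3 * x           # any downstream calculation upon the variable

    print(evaluate(x_actual))  # ~30.6
    print(evaluate(x_sign))    # 30: the quantization error is carried
                               # forward as if the result were wholly true

The point being only that the error enters at the moment of
representation, before any operator is applied.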

In this way an inaccurately accounted for input can subjectivize the
equation from XYZ to ~X~Y~Z and yet it may still be faithfully believed
that the description of events is by default coherent and removed of
error by its use. The claim then that A may be derived from XYZ could
more actually involve the result of processing ~X~Y~Z in its place.
Even still, the result A may be believed ‘pure’ because of the role
of language and a certain uncritical evaluation of truth within
these evaluations.

If the result is generated by partial truth, and is only somewhat true,
then the answer or result would also be influenced and affected by this
tentative condition; therefore ‘A’ also tends towards ~A by default,
that is: partial-A. The approximate categorization thus turns XYZ => A
into a scenario of ~X~Y~Z => ~A instead.

This is no great shakes. The problem with such a basic condition of
ungrounded truth in relation to observation is that its mediation by
existing language defaults to this scenario. It is where things begin
because communication is short-circuiting. Truth is not separated from
the structures of falsity that are used to sustain it and thus exists in
a state of impurity, day to day, in the world of existence. Far from
pristine or easy to access at the level of fidelity of equations. It is
just not that easy.

The problem is that binary thinking can tend to believe it is, which
allows simpler thinking to believe itself knowledgeable about all things
simply because it can be mediated within existing language in terms of
the exchange and communication of signs. XYZ and A instead of the
truth they seek to represent and model.

If you add the critical detail of relativism into this ideological
cauldron, which occurs when falsity exists in such computations and
processing, the error-rate can become very high relative to whatever
truth may be partially sustained within a given category.

For instance, A could be assumed to be ‘all true’ if as an answer it is
removed of any and all falsity, so that only truth remains. Whereas if
it were only partially true, or ~A, then a lesser accuracy exists by
which to correlate the variable with the actual reality it seeks to
represent and model, via ‘math’ not language, and yet it is behaving
exactly like subjective language in this way.

In some sense, in binary terms, if A is not entirely true then it is
‘not-A’, that is, it would be considered false. So if it is 99% true, it
is still false, because A does not equal ~A. Another way to say it is
that A = A would presumably be objective, yet if A = ~A it would be
subjective, not wholly true as a representation of one condition by
another, and can invalidate the claim. If this difference is ignored,
the result is Truth = Falsity.
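
A sketch of this strict binary evaluation, where anything short of total
truth collapses to false (Python; the threshold framing is my
illustration, not the essay’s notation):

    # binary judgment: a claim counts as true only at 100% truth
    def binary_true(truth_fraction):
        return truth_fraction == 1.0

    print(binary_true(1.0))    # True:  A = A
    print(binary_true(0.99))   # False: 99% true is still false, A != ~A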

What happens through too simple binary processing and ideological
frameworks is that the truth is whittled away via such processing, until
only a minor or fractional truth exists and subsists within the variable
or sign which seeks and purports to represent it, though it is a
misrepresentation likewise.

In other words, the result A then defaults to ~A, which, instead of
hovering at 99%, moves ever closer to an accuracy of 1% in an
uncritical and ungrounded evaluation, exchange, and circulation where
the belief that A = ~A becomes standardized, viewed as A = A though more
actually functioning as ~A = ~A, uncorrected and unaccounted for.

It is not an issue of trust to determine this, it is the impact of
relativism calculating upon itself via its biasing, yet not correcting
or accounting for its errors that instead become foundational for the
views.

What this is to say is that a minor truth, the kernel perhaps, of a
category or sign meant to represent some ‘truth’ can then only exist as
a partial truth without its empirical grounding, removed of error. And
this truth must be sustained to some minimal degree to allow the
correspondence between patterns to be believable, that ‘this’ equals
‘that’ given the evidence.

The original presumption that A = A can be viewed as:  [truth] = [truth]

Yet if the condition is actually:  [truth] = [some truth]

What it amounts to is:  [truth] = [some truth] + [some falsity]

Undifferentiated, it becomes:  [truth] = [some truth & falsity]

Yet it is not simply A = ~A, because  [truth] ≠ [partial truth]

Instead it is something more: A = ~A + (something else) ...

if not:  A = ~A + not-A (falsity)

What occurs is that A is not able to be represented accurately by ~A
or not-A, yet they substitute or stand-in for it as placeholders in the
equation or computation by default of not logically accounting for
these dynamics.

In this way, A is removed as an actual variable, perhaps only
becoming virtual or ungrounded as a representation:

~A = ~A + not-A

or:

~A = ~A + falsity

Which then automatically translates into:

~A + falsity = ~A + falsity

This is tautological. The purity of ‘A’ then becomes equated with

~A + falsity

And this can include contradictions, such that beliefs of

‘A’ = partial-A + not-A.

The reality of ‘economy’ can be negated and a partial economy can be
substituted for it, all within a realm of calculation and numbers, where
the words are matched up with numbers seeking to represent the
concepts outside logical accounting. Thus a partial view can operate
an ungrounded virtual economy that functions ‘against economy’, etc.

To put it in more direct terms, going back to the version evaluating
truth as the variable:

[truth] = [some truth] + [some falsity]

Yet this purity of truth is not sustained, ‘some truth’ cannot represent
‘all truth’, so it must be acknowledged that at most only ‘some truth’
will be accessible within the equation, turning it from A=~A into ~A=~A,
though with additional errors/falsity:

[some truth] = [some truth+falsity]

Thus ~A equals ~A, though with errors or falsity that could even be
not-A or contradictions to the original consideration, which not only
exists virtually, in that ‘A’ is not actually represented by these
variables, it is an ungrounded assertion or belief.

Accounting for this, then, whatever ‘truth’ exists in this evaluation,
it exists in an impure context:

[some truth+falsity] = [some truth+falsity]

Another variable such as ‘virtual-truth’ would be needed to address this
ungrounded condition, whereby actual truth is mediated and represented
by partial-truth indistinguishable from accompanying falsity…

‘virtual truth’ = [some truth+falsity]

wherein:

‘actual truth’ = [some truth+falsity] is FALSE.

It is not for the sake of abstraction that this is demonstrated.
Language functions under the presumption that ‘actual truth’ is
accessible without accounting for truth within logical analyses and
verification of the claims made, whereby reasoning is detached from
truth and its accounting as truth, and instead is ‘partial truth’ that
is not separated out from the accompanying falsity which can be used to
sustain it. In this way, there is a ‘virtual truth’ that functions
friction-free within language, communications, and exchange, as if
‘reasoning’, yet is separated from actual truth, its empirical
grounding, the correction and removal of errors – and it may not even be
possible in the existing form of language to account for this, thus it
defines the trap of language or the basis for ‘babble’.

What the logical situation requires is that A = A, or that [truth] = [truth].

Instead what is allowed is A = %truth + %falsity, which becomes a
presumption of ‘virtual truth’ as if it is 100% true.

In this way you could have 1% truth and 99% falsity which is
ideologically believed to be 100% true by binary reasoning.

i.e. [truth] = [1% truth] + [99% falsity]

It is a false-perspective, unreal, only virtual, not connected to the
world accurately. And it is also ‘not true’, it can be falsified very
easily such that 100% does not equal 1%, if acknowledging the
99% errors involved in the claim of pure reasoning.

The way logical accounting works is if removing the falsity, then the
truth can be recovered:

if: [virtual truth] = [1% truth] + [99% falsity]

Then in removing falsity from the equation, ‘truth’ could be
recovered...

[virtual truth] minus [99% falsity] = [1% truth]

then:

actual truth = 1% truth

or, since this recovered kernel is all the truth there is:

truth = 100% true
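
A sketch of this accounting in Python, using exact fractions and the
essay’s hypothetical 1%/99% proportions:

    from fractions import Fraction

    truth, falsity = Fraction(1, 100), Fraction(99, 100)
    virtual_truth = truth + falsity  # presented and circulated as if 'true'

    print(virtual_truth == 1)      # True, but only as truth AND falsity together
    print(truth == virtual_truth)  # False: 1% does not equal 100%

    # logical accounting: remove the identified falsity, and what remains
    # is the actual truth, now unmixed
    recovered = virtual_truth - falsity
    print(recovered)               # 1/100, wholly true once separated

The falsification is trivial once the proportions are stated; the
essay’s point is that ordinary language never states them.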

This process of refinement of ideas is not occurring and largely cannot
occur given the existing approach. These truths may be suspended within
language, yet they remain in a context of falsity by default, so while
they may be ‘known’, they can neither effectively be sustained nor can
they be the entire basis for communicating, only in ‘truth’, versus
using false constructs as carriers.  A future language could instead
model and reference only truth and then hypotheses, getting rid of all
the excess and extraneous data for a more pure and by default grounded
communication.

***

In other words, a version of ‘truth’ is normalized that contains ‘some
truth+some falsity’ yet it is maintained as if ‘true’ by default of its
being mediated as truth or believed true.

If errors are allowed and actually sustain observations, themselves
being the basis for ‘objectivity’ and the belief in operating and
processing information in such a realm, then over time there can be
contagion, some impurity that is never separated out and is retained
within the observation, existing outside any logical accounting of its
truth in ones and zeroes within a shared empirical modeling of ideas.

Lacking that purity, the impure calculations are a consequence of impure
input and of assumptions about how signs exist in relation to truth.
They are not truth themselves, only a container that can be improved
upon or deteriorated via how the information is mediated. To assume
truth exists in the alphabet is potentially to confuse truth with
language itself, its signage as if what is true, versus standing-in for
it. If it is grounded the sign may actually be true, yet if only
partially grounded it is not wholly true, and if believed so it can
begin to foster, build upon, and rely upon false-perspectives as a basis
for exchange, malfunctioning.

In this way, someone could wrongly believe that simply stating something
is true, makes it true. This is particularly relevant to starkly
ideological binary viewpoints and also relativism which is ungrounded
and gains its truth by ignoring other truth.

Why this is important is because language proceeding in this way as
communication tends towards falsity by default of its inaccuracies, the
noise introduced and sustained within exchange, that then tends towards
the devolution of truth within language because of uncorrected errors,
versus a purification of truth by communicated language. So for some the
sign (A) is an answer, whereas for others it remains contingent, a
question, depending upon the veracity of the claim, its truth.

[each] [sign] [becomes] [a] [contingency] [given] [its] [logical]
[accounting]

For instance, every letter potentially in superposition for meaning, the
autocorrect fixing what may become flaws in the communication and yet if
something goes a rye or its missng, meaning begins to shift, and perhaps
an unseen error or mistake or slip could take leave from a condition of
knowing, into a more ambiguous realm, a lost forest, hearing something
else. Nothing new hear.

Just for sake of clarification:

For instance, every letter potentially in superposition for meaning, the
autocorrect fixing what may become flaws in the communication and yet if
something goes [awry|a rye] or [its|it’s] [missng|missing], meaning
begins to shift, and perhaps an unseen error or mistake or slip could
take leave from a condition of knowing, into a more ambiguous realm, a
lost forest, hearing something else. Nothing new [hear|here].
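
One way to make this superposition explicit (Python; the enumeration is
my illustration of the bracketed notation above):

    # each [option-a|option-b] slot doubles the space of candidate readings
    from itertools import product

    slots = [("awry", "a rye"), ("it's", "its"),
             ("missing", "missng"), ("here", "hear")]

    for reading in product(*slots):
        print(reading)   # 2**4 = 16 resolutions of a single sentence

Each reading falls into one circuitry or another only when grounded by
an observer.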

So this is linguistics of the last many decades, what is possible with
post-structuralism, it would seem. It can get into a calculus where a
typo shifts the register and the phase of a paragraph could transmute
into paragraph 2, as a result of this. If it leads towards insight and
is based within truth, it can be an advanced way of communicating,
beyond or through the broken structures, rewiring them to regain the
potential beyond the existing limits and boundaries. Yet if instead of
insight it was simply error, like an error in thinking, then the
condition could be evaluated beyond intent alone.

In this way, letters, words, sentences, paragraphs, chapters can exist
in a condition of suspended judgment, whereupon their evaluation each
could shift one way or another, falling into one kind of circuitry or
another based on how it is grounded by the observer, perceiver, or
interactor with the communication. Any given structure could exist in a
type of superposition, for instance with typos, errors in reasoning,
life errors, as this can influence communication, how something is
written and read, and how it is related to and through by others,
in shared or unshared frameworks, etc. So the exchange is
N-dimensionally complex.

A more realistic evaluation would be a string of such contingencies, or
states of superposition, that when taken together, like with XYZ, then
could fall into some configuration given how they exist and within what
perspective the interaction occurs. This is seemingly the question of
perspective. For example, if something were communicated, it could be
considered ‘true’ by default of its language, such that:

[truth] = [some observation communicated through language]

Whereas, the actual state of language exists as a realm of variables in
superposition, perhaps true, perhaps not.

unknown = [true|false] [true|false] [true|false] [true|false] [true|false]


If everything that was communicated was true then ‘A’ could actually be
true:

true = [true] [true] [true] [true] [true]

Yet if for some reason something is false, it would not add up the same:

false = [true] [true] [true] [true] [false]

This is because it is not 100% true, yet if approximating it could still
be considered true…

true = [true] [true] [true] [true] [false]

So here is a condition where truth = 80% truth, i.e. 4 out of 5
variables are true.

What happens in ungrounded evaluations is that 80% truth substitutes
for ‘truth itself’, including the falsity or errors. And the more this
occurs, the more falsity becomes institutionalized in the conveyance
of ideas deemed to represent ‘truth’.

The ungrounded binary ideologue could equate ‘any truth’ as if
wholly true, such that:

true = [false] [false] [false] [false] [true]

This goes on all the time in terms of ungrounded reasoning of ideas.
It is the default. In this same way, the binary ideologue can exploit
‘logic’ to deny truth if there is any falsity within anothers viewpoint,
such that:

false = [true] [true] [true] [true] [false]

So it is a very powerful analytic and ‘debate’ technique to discredit
and devalue inconvenient facts and observations as they exist outside a
given ideological viewpoint. This is what allows for onesidedness in
exchange in binary terms. This condition can be exploited to only allow
for certain facts to be accounted for by a given observer, and in turn
reduces questions of ‘truth’ into an accounting of “facts” instead of
ideas. As if everything is in such a pure state of grounded calculation,
the conceit of higher reasoning without the problem of accounting for
actual truth, and instead only a corrupted self-serving version of it.
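
These three moves can be stated side by side as a sketch (Python; the
rule names are mine, not the essay’s):

    # one string of claims, three ungrounded evaluation rules
    claims = [True, True, True, True, False]

    strict = all(claims)        # any falsity => false: the 'debate' move
                                # that discredits four truths for one error
    credulous = any(claims)     # any truth => true: the ideologue's move
                                # that launders four errors with one truth
    proportion = sum(claims) / len(claims)  # 0.8: four of five claims true

    print(strict, credulous, proportion)    # False True 0.8

Neither binary rule preserves the 80% condition; only the proportional
accounting does.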

It would take a lot of description to get into a 3-value and N-value
explanation, yet the essential difference is that the condition of
partiality can be effectively addressed and is not forced into an
either-or framework by default, which forces inaccurate if biased
approximation. Thus bias, distortion, and warping can be neutralized
and accounted for versus structurally relied upon.

For 3-value evaluation, a condition such as follows could be worked
through:

[unknown] = [false] [unknown] [truth] [unknown]

Such that ‘partial truth’ could be identified and recovered from a
context. Perhaps more evident, an N-value approach could allow for
further gradation, for instance identifying major or minor truth, or
probabilistic weighting:

[minor truth] = [majority falsity] [false] [minor truth] [unknown]

Where in identifying and recovering ‘minor truth’ and removing it of
its context of errors, it can represent ‘all truth’ if the impurities are
removed, contingent upon accounting for further or existing errors.
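
A minimal sketch of such bookkeeping, using Python’s None as the third
value ‘unknown’ (the pairing with a Kleene-style three-valued
conjunction is my assumption, not the essay’s notation):

    # 3-value evaluation: None stands for 'unknown', so partiality is not
    # forced into an either-or verdict
    from functools import reduce

    def and3(a, b):
        # Kleene-style conjunction over True / False / None (unknown)
        if a is False or b is False:
            return False
        if a is True and b is True:
            return True
        return None   # unknown remains unknown rather than being rounded off

    string = [False, None, True, None]   # the condition above

    verdict = reduce(and3, string)       # False: the conjunction fails, yet...

    minor_truth = [v for v in string if v is True]    # ...identified and kept
    falsity     = [v for v in string if v is False]   # accounted for, removed
    unknowns    = [v for v in string if v is None]    # held in suspension

Rather than one binary verdict over the whole string, each component
keeps its status, so the minor truth can be separated from its context.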

To further give a sense of evaluation, the original condition of
superposition may involve:

? = [truth-falsity] [truth-falsity] [truth-falsity] [truth-falsity]

And in any empirical model the resulting answer or ‘truth’ is claimed
would always include the remaining unknown as the potential for error…

minor truth = [minor truth] + [unknown] minus [falsity]

truth = truth + unknown

And so here is the paradox, the problem with absolute truth in a worldly
realm: it cannot be modeled as ‘entirely true’ or ‘all true’ because it
is contingent upon new information or the uncovering of existing errors,
so its state of being true, its truth-status, can seemingly shift, here
illustrated by percentages meant to convey something less than 100%
true. An approximation is thus needed anyway and can never be removed,
such that ‘known truth’ is contingent, tentative, yet can be majority
truth and overwhelmingly true, even 99.99999999 percent true, if
allowing for such a conceptualization of proportion versus the
.00000001 percent impurity that may exist given unknowns. Yet in
situations of paradigmatic change, if the model is wrongly grounded,
that nearly 100% truth that is believed with certainty could become
minor-truth overnight, such that it is 5% true when operating in
another model that is more accurate. So the basis for a given
framework of truth can be dismantled if it is not actually or
wholly true or contains structural errors. This is why censorship
is necessary to prop up bad ideas, to prevent influence from
greater accuracy from undermining the functioning of an inferior
approach. It cannot ground the momentum and instead is shaken
apart by the stresses involved on its structures, which cause failures,
shearing, collapse, creating dysfunction if not destruction because
the virtual condition is not actual, yet its contingencies need to
become structural for it to be sustained.

***

So, a binary ideologue could easily see their answer A as the clear
result of what is ~X~Y~Z, though believing this as the result of the
more pure equation XYZ. Their view would be ungrounded,
only virtual, allowed for lack of its logical accounting.

If truth recedes through dilution via a sustained error-rate within the
signs that seek to represent it, then it could be assumed that the
partial-truth is likely much less than 99%, by experience with ideas and
categorizations in their ambiguity and reliance upon unresolved
distinctions.

A highly optimistic approach could assume it is towards 50%, though for
example, and as I believe could be shown evident, in most cases the
truth is very small in its precise accuracy given the immense framework
needed to transmit it. In this way, the framework could be error-ridden
yet support and sustain a truth temporarily. And thus, in some category
or idea, the truth that exists may be small compared to the way it is
evaluated, and the rough edges may be necessary to support the view. And
yet this truth could rely on inaccuracies or vagueness as the container
or the scaffolding or structure by which it is identified, communicated,
or shared.

For instance, the truth in this present writing is sustained by the
ambiguity of all the surrounding words that allow its message to be
conveyed, yet it is minor compared to all the language that is necessary
to convey the framework of the idea, which still has not effectively
been fully communicated in its essence. Perhaps it is not a word or
sentence and instead a concept that spans across various other ideas
as a configuration.

The point to be made about binary logic is that if you have a minor
truth that is sustained and represented by an approximation, then within
an equation or processing, the interaction of these minor truths,
believed instead to be maximal, tends towards zero or falsity as a
result of the errors not being accounted for yet being included in the
final answer, as if pure truth. If in the extreme the kernel of truth
were 1%, then ~X~Y~Z, each having this minimized truth interacting
within an equation, could become ever smaller in the truth that can be
shared in the same framework.

if ~X and ~Y and ~Z each = minor truth + majority falsity

and:

minor truth = .01, majority falsity = .99

then:

~X~Y~Z => .01 cubed, .99 cubed => 0.000001 truth, 0.970299 falsity

The original ratio of falsity to minor truth: 99; the new ratio:
970299.

An exponential difference. Consider the inversion, where truth is moving
from:

0.01 => 0.000001  (about 1.03e-6 relative to the accompanying 0.970299
falsity)

As falsity is the basis for further and further interaction within
error-reliant language, it is proposed the truth within its context
further and further recedes towards zero, majority falsity, nothingness.
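
The symbolic calculation above, worked exactly (Python, using the
essay’s hypothetical 1%/99% split per variable):

    from fractions import Fraction

    truth, falsity = Fraction(1, 100), Fraction(99, 100)

    truth_xyz = truth ** 3       # 1/1000000: the kernel after one interaction
    falsity_xyz = falsity ** 3   # 970299/1000000: the accumulated context

    print(falsity / truth)           # 99: falsity-to-truth ratio per variable
    print(falsity_xyz / truth_xyz)   # 970299: the ratio after ~X~Y~Z interact

The kernel does not become less true; it becomes exponentially smaller
within the noise that surrounds it.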

This is more a poetic and symbolic conveyance, yet the essential
condition proposed is the same: in allowing impurities and structural
errors to persist in thinking, their interaction as a basis for exchange
further accumulates as the context in which truth subsists and on which
it also relies. A contingent truth could become inseparable from its
context of falsity because that context is required to sustain the
belief that it is actually ‘all true’ in the sustained
false-perspective.

Now imagine N variables, where trillions upon trillions of such
ungrounded interactions could take place within a context of onesided
binary reasoning, where minor truth could function as truth itself, and
how the falsity would build up in this way. Where flawed models used in
data mining and automated analytics, biased processing, then include
errors and falsity in calculations, the result is standardized views
that reinforce the given worldview, however ungrounded it may actually
be.

If binary, it could be assumed each variable is interacting with the
others at 100% truth, and thus A would arrive at 100% truth via
calculation. Whereas if truth is only minor, only 1% by comparison
within the container that represents it, then in the calculation, if
each is towards 1% and the operation is multiplication, the ‘answer’
tends towards absolute nothingness even while in its partial-truth it
can be believed or identified as ‘wholly true’ due to binary ideology.
In this way an infinitesimally partial truth that is used for
calculation can be viewed as 100% true by ignoring the ambiguities
(grey-area, paradox) and making it a simple decision: if there is truth,
it is totally true, in a given relativistic framework. In this way 1%
truth can equate with 100% truth, or a partial-truth can be equated with
absolute truth.

Someone could use a binary mindset and go through the world believing
only in their partial view of things and equate partial-truth with
truth, and the worldview could correspond to the external signs
surrounding them, that their beliefs accurately map onto external
reality and its dynamics – and yet this could be an entirely ungrounded
relation, observation, where minor truth stands in and functions as if
absolute truth, because it is sustained in language and mathematics
accordingly. Yet it could be false, an unreal condition, contingent upon
the issue of logic and accounting for truth beyond a given relativism.
It is why the ungrounded observer by default must be made infallible,
with power to determine what will be acknowledged true by their limited
and warped perspective and onesided evaluation.

Someone who functions in a grey-area consideration, of 3-value or
N-value logic, must also mediate a realm of partial truth, due to the
condition of language and the way things exist today.

They likewise may encounter the enigma of minor truth within a
situation, yet instead of ignoring it or exploiting it as all true, it
can be accounted for if removed of extraneous errors. Perhaps this is
the purpose of deconstruction, allowing the dismantling of information,
concepts, ideas, to better separate truth from its scaffolding.

What this approach allows is a more accurate evaluation of A by
accounting for its partiality. ‘A’ removed of its partiality is then
more-A than not. If minor truth exists in an equation, by comparison,
then if it is acknowledged to exist and the complexity of the evaluation
is acknowledged, then the error-rate involved in the too-simple equation
can be accounted for, accommodated, adjusted or corrected presumably, to
some extent, although it is seemingly not possible to remove majority
falsity from existing language in its given form.

~A => .000001, for example: by accounting for this minor truth (a
‘quasi-millionth of a truth’) as its truth, ~X~Y~Z can be evaluated as
XYZ if the known errors are removed (.999999 falsity). In this way, by
acknowledging the partiality of the answer, ~A can become more-A or tend
towards A again, even though dealing in a realm of minor truth, its
kernel. That is because the .000001 truth, in this fictional
calculation, being the only truth, becomes truth removed of its
partiality and thus made whole, or 100% truth contingent upon the
hypothesis.
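
A sketch of that refinement step (Python; the dictionary framing is my
illustration of removing accounted-for error and keeping the kernel):

    # the answer ~A, with its truth and its identified falsity made explicit
    answer = {"truth": 0.000001, "known_falsity": 0.999999}

    def refine(a):
        # the empirical requirement: remove every identified error
        kernel = dict(a)
        kernel.pop("known_falsity")
        return kernel

    refined = refine(answer)
    total = sum(refined.values())
    print(refined["truth"] / total)  # 1.0: the kernel, contingently 100% true

Contingently, because unknown impurities may remain; the 1.0 holds only
for what has been accounted for.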

Note: the symbolic calculation here of 1% or (0.01)(0.01)(0.01) as
~X~Y~Z is meant to convey how whatever partial or limited truth, in its
interaction with other truth, tends towards zero in further interaction.
To clarify, this is meant to convey a situation of error-ridden
information as the context for whatever small truth may exist, and in
the interaction of these truths, shared or unshared via intersecting
structures, it may be reinforced or limited, yet what does exponentially
grow is the noise, that is to say that ‘truth’ is not necessarily
becoming less true, though it is likely becoming smaller within the
context of what surrounds it as falsity or error, and thus the error
increases via the further and further reliance upon error for mediating
truth. This is why something that is partially grounded or ungrounded
can have a deteriorating effect upon the exchange of truth in such an
interaction because it is being devalued by the errors which seek to
sustain it, which then become representative of the said truth, which
thus can be believed to include the errors as truth, which is what
makes ideologies so powerful if uncritical.

So a condition of ~A=> ~X~Y~Z can be corrected for by paradoxical logic
and reestablish or ground the initial equation once again, though in a
context of actual truth in whatever degree it actually exists, versus a
binary view that what is calculated is what is true, without correcting
for the errors.

In this way, for the binarist who arrives at A, it tends towards total
falsity or zero, and will only ever be in a realm of ~A if ungrounded
and unacknowledged, because the sign itself is the form of mathematical
validation: pattern matching in some framework, with a distorted
assumption about how truth and logic operate.

In contrast, the 3-value thinker who identifies minor truth as part of
their empirical evaluation and self-corrective modeling, then can
acknowledge what kernel of truth exists, however minor it may be amidst
a wasteland of extraneous or temporary information or signs needed to
sustain it, though through doing so, capture the minor truth that exists
within a condition of falsity or partial-truth and in this way recover
it, claim it, account for its truth. Removed of falsity and error, it
tends towards 100% rather than 0% accuracy: not more truth, but more
pure truth, removed of its known impurities, as is the empirical
requirement.

The 3-value observer then takes the .000001 as the answer, and
contingently it can function as ‘A’, contingently 100% true (this
process of refinement seemingly allowing for a pragmatic binary
judgment), though remaining in a grey-area of consideration and thus
likely retaining unknown errors. That is, if what is recovered is
removed of all known impurities, of not-truth, then it would tend
towards total truth versus total falsity, if logically accounted for in
its every aspect.

The binarist believes they function at A => XYZ when the truth they
observe is minor, tending to zero, and they only ever remain in a
framework of ~A and ~X~Y~Z yet cannot allow for or acknowledge this.
Thus it is possible for them to believe what they perceive is 100%
accurate and true by default of their processing of it, yet this view is
sustained by structural bias, onesidedness of evaluation, and distorted
frameworks.

The 3-value thinker believes they function in a realm of ~A => ~X~Y~Z,
which is the context for discovering truth within the surrounding
ambiguity. And in this way the specks of truth recovered from ~X and ~Y
and ~Z become removed of the extraneousness of their partiality, and if
this can be accounted for prior to further calculation and become the
structural basis of evaluation, calculation via logic, then the minor
truth is maximized in each variable, as ~X becomes X via this refining.

The minor truth of ~X as .01 becomes the contingent truth of X as 100%
if removed of its impurities. If this occurs for XYZ together, then the
truth within them is no longer minor, it could be major, such that truth
is interacting with truth in the equations, as signs representing truth
as variables, ‘this’ equals ‘that’ as an accurate observation. Which
seems to be the presumption of grounded mathematics, that such worldly
calculation is indeed possible where mental modeling and external
existence are closely aligned. Perhaps in some scenarios this is the
default, though at some point language creeps in and predominates yet
presumes itself under the guise of mathematics, devaluing its truth.

The standardized test as example, especially multiple choice where
ambiguity may exist and allow ‘valid’ alternative answers yet these are
ignored and punished for a binary assessment, which leaves a paradoxical
thinker to try to unthink the higher-level analyses which then devalues
their own reasoning for more rote memorization of ‘correct facts’ versus
actual analytical thought about questions. Then numbers are correlated
with these tests, as if quantifying aptitude, when instead it is more
trained obedience within ideological frameworks that are themselves
ungrounded from the reality they sustain themselves within.

It is the backwardness of logic, the dumbed-down version dictating its
reality, over and above more accurate and insightful and honest
interpretations. Only one view is right in the binarist’s calculating
mindset, theirs. Either you conform or you are wrong. And this is the
view that functions within computers and technology today, and also is
driving public policy. It can be completely obliterated in terms of
logic, every argument and ideological position destroyed. There is
nothing to it, simply by accounting for truth. There is no there
there in the binarist mindset, it is a bluff, centered on nothingness,
ungrounded in actual evaluatable observations. It is an infallibility
complex allowed for lack of error-checking and correcting, the
requirement of a responsibility of thought. It is unnecessary in that
viewpoint because it is presupposed superior yet without the substance
– only the image or sign, and its true belief – not in truth, but in
itself, as ultimate arbiter. It is essentially a type of anti-reasoning.

In contrast, the result of this grounded processing is that the
interaction of truth within XYZ, a purified version of ~X~Y~Z, then
tends towards truth itself, with minor impurities remaining, though
tending to 100%.

In this way the grey-area thinker can account for minor truth that
becomes maximal, where A => 100% ‘true’ via analytic reasoning that is
grounded, if accommodating and accounting for truth. It is not a
presumption of ‘knowing everything’ or having access to ‘all truth’ and
always being right and correct; instead it is a logical evaluation of a
given model, or concept, and ‘truth’ in that context is accounted for,
within a given boundary or circuitry or hypothesis. And from what is
known and unknown, truth ascertained in that scenario, given the
variables, which perhaps are only temporary or highly specific. It is
not that the truth is always the same, if the context shifts or changes,
as its structure may no longer exist or may collapse again via other
dynamics. It is more as if truth is being referenced, acknowledged, and
within particular conditions can be observed in more pure states, which
is the realm of logic and considered thoughts. Just because someone
acknowledges truth exists or seeks to serve it, does not mean they know
‘all truth’ or have access to ‘all truth’ in all situations at all
times, yet in a binarist viewpoint that is exactly what the default
condition is because the observer is infallible. It is a dangerous
realm. To put it more plainly, a binarist could equate
(minor-truth)(minor-truth)(minor-truth) = 100% by default of their
equating some truth as absolute truth, when instead it could tend
towards falsity, zero, while believing they have arrived at the answer
and know things because they can count and speak.

Whereas a grey-area thinker could evaluate the minor truth in ~X~Y~Z
and remove the condition of partiality it relies upon in whatever its
existing context, say short-circuiting language, and then allow for the
evaluation of (truth)(truth)(truth) => truth. Which instead more likely
does tend towards 100%, while still acknowledging unknowns and other
unobserved impurities.

This is why paradox and contradiction are so important: in identifying
ambiguities within otherwise functioning models of truth, they allow for
error-correction and the requestioning of hypotheses, their improvement
or reconsideration – and in this way, if rigorously practiced, lead from
lesser-truth towards greater-truth via working through and resolving
such inconsistencies. This acknowledgement of
error and paradox is essential for grounding ideas and observations,
without it, ungrounded beliefs take over and become separated from
reality, where the observer does not account for their influence over
their observations and in this way can skew or determine what they see
to fit their given models by ignoring truth or contradictory
information, censoring or denying it if not outlawing and making it
illegal so to sustain a given worldview they remain in control of.

The more complex things get, the more these different mindsets go to
polarized extremes, one ever more grounded and the other ever more
faith-based in its own self-righteous interpretation that only allows
for its truth on its own terms, even if it is actually in error or
reliant upon error for its validity. Versus a faith based within truth
itself, not the observer’s own egotism.

If someone says ‘people’ x ‘society’ x ‘work’ = ‘productivity’,
a binarist could evaluate this and arrive at a model that may be
effective in communication, yet unreal from another point of view
that is grounded. It depends on how the signs are evaluated, their
integrity as models, perhaps they are each biased so it is only
‘some people’ x ‘some society’ x ‘some work’ = ‘some productivity’. It
may not include everyone. This is subjectivity, the language uncorrected
for as ideas, yet presumed right by default of categorization, that the
sign is itself truth, such as people = all people. Most things are like
this, whether purposefully or not. For instance, not everybody is the
same, yet if it is assumed they are and calculations are then made, at
some stage this could plateau for whatever truth could be sustained
inside its view, the inaccuracy becoming a boundary or limitation that,
in interacting with other distorted information or ideas, tends towards
falsity. The only way to deal with this is through
empirical analysis and grounding of observation within a single
common framework that can be referenced as a shared model
and also removed of every identifiable error.

Without this, language has no stable meaning, it remains in a realm of
the ambiguous, reliant upon it for communication, which seems to be
directly tied to its linearity and long-form scripted transmission,
versus more conceptual frameworks and diagrammatic models of ideas
that are outside certain or given approaches. Perhaps that is why the
computer exists, to provide language its next platform for further
development beyond the page or through it. Moving into and through its
logical structures versus trapped outside on its shiny yet deceptive and
shallow surface.

Such modeling could tend towards 100% truth in its interaction, such
that exchange of ideas could reference and test a common empirical
model that is dealt with by all people who mediate it as language. By
contrast, linear language tends towards overall falsity in conveyance
except for the transport of minor truth – and this is its tremendous
inefficiency and ineffectiveness, short-circuiting.

The issue this involves is recognizing the existing state of language
and observation and interaction as it tends towards a state of falsity
by default, and the enormous effort it takes to achieve, recover, and
share minor-truth, so that it can become the groundwork for shared
relations. Yet even then it remains unstable, until a new model of
language can become the foundation for new relations, communication,
exchange based on error-corrected ideas and access to the purified
versions that become the basis for education, learning, growth, economy,
everything. This is where circuitry relates to concepts, as the role of
logic moves into structures and forms, so that patterns or dynamics can
be identified via their context or configuration – if not perspective,
given scenarios – and this would be the grounded truth of relativistic
frameworks, to see things from a particular facet or vantage that
uncovers or reveals truths otherwise not yet accounted for. And so these
dynamics in a shared modeling, where hypotheses exist as models and
can be tested, challenged, and errors can be found that are not wrong in
being there, only in being uncorrected for, addressed, such that they
can function also as discoveries as with the great moments in
scientific experiment.

It is with a broken brain and little working memory that these fragments
of thought are conveyed, the attempt at math humorous here, obviously
incoherent in realistic terms, yet offered as a sketch of what could be
proven in terms of the claims made (even beyond ‘you cannot prove, only
disprove’ being repeated as a truism to prove its claim). That is, while
not having attained accuracy or effective examples for the
probabilities, amid the mush of the presentation, this has been worked
out previously and should be sufficient to convey the basic idea, which
most mathematical minds could develop further as need be to clarify, or
perhaps linguists as the case may be. This is not meant as bravado, only
common sense: if ideas are removed of truth, their interaction in biased
reasoning and exchange tends towards falsity. And this is a common
condition that needs to be mediated. By correcting for it, temporary
truth can be evaluated at higher fidelity, a more pure and accurate
state, yet still collapsing due to language.

Paradoxical or 3-value logic allows for this. And thus such logical
reasoning can tend towards truth. Whereas for the binary thinker, their
assumption of truth leads them towards greater falsity and error. And
perhaps the thing is, paradox incorporates 2-value logic by what is
retained as truth, so it may be an issue of how to manage binarism, rather
than existing outside of it or beyond its dynamics. What this may
indicate is that binarism could be grounded within a context of
paradoxical logic or 3-value logic, and thus a contingent binary view
could exist which does mediate events in true-false evaluations, though
they are of a different kind of error-correcting rigor that is not
present outside the 3-value context, because oftentimes there are
unknowns or only partial-truths to mediate, to distill truth from
falsity instead of accepting or rejecting observations without this
process of refinement. People in this way who are grounded in logical
reasoning could function with truth as their directionality, perhaps
even “biased” towards it, as a type of compass for navigation, as with
acknowledging falsity and in this way operate within a binary framework,
yet not have the issues of binary ideology and inaccurate binary
processing that absent paradox cannot sustain this balance within a
larger ambiguity. Or perhaps the ambiguity, by being addressed in
3-value logic, then allows for 2-value logic to find its grounding. In
whatever way, both the black and white views can co-exist with a context
of grey-area and its range across these, though it would seem a
progression from 3-value to 2-value, and then ultimately 1-value only,
truth itself. This is another question about how such structures could
be correlated, yet the idea seems to be that ‘truth’ is functional
across these related evaluating mechanisms, and hierarchically
interconnected even, where ‘truth’ provides structuring, even for
conditions of identifying falsity. This language and my mind are not
prepared to explore this in any depth at this time.

Now consider if computers and technology can only process things in
binary terms, and that programming is occurring in binary terms, and
relations between things and information and ideas are being mediated in
binary terms, including major themes like economy, poverty, health,
governance, law, art, environmentalism.

Perhaps the Two Cultures of C.P. Snow involves mathematics and language
as they could be mediated by those who are predominantly binarist, and
the worldview that results, versus those who are paradoxical grey-area
entities who are aware and observe truth and seek to acknowledge and
remove errors in their observations and interactions with others.

In such a way, binary ideology may transform mathematics into
subjective language and turn it into an issue of ungrounded faith
and belief – ultimately in themselves – if not in science as the answer,
its priesthood to preside over populations.

Whereas a 3-value thinker may see the common relation between
mathematics and language within logic, the shared structure and origin,
if not in circuitry then patterns or other conceptualization, though of
a calculus of each, where transformative processing occurs and perhaps
is not even separable. The potential to correct for errors then leading
to essential structures or conditions, like with sculpture removing what
is not meant to be there and then seeing the thing in itself that
remains, as a model of the idea. Perhaps mathematics and language have
a functional relation, one with the other, an ecology between them that
would be revealed in a unified modeling where logical operations between
them may be integral to establishing an empirical model where number
and letter exist in coherent symbiotic relation.

It would seem this is the core of what a computer would need to become,
if to process ideas and information accurately as patterns, the numeric
of calculation tied to its logical processing, as information is
grounded into the shared model, run and tested against its
configuration, ones and zeros and partial truths and partial falsities.
That kind of fidelity is what is needed and yet does not yet exist.
Simply to communicate and share ideas ‘in truth’ versus within a
scaffold of noise tending to falsity.

In this way, this imperfect text, its errors, attempting to share its
minor truth.

Note 1: Questions of social relations could involve how a condition of
‘minor truth’ interacts with other ‘minor truth’ in a condition of
falsity; or a ‘minor truth’ relating to a condition of ‘truth’ in terms
of exchange. Perhaps those in ‘truth’ would be devalued in the exchange
if having to mediate ‘majority falsity’ in a given framework to reach
the minor truth. Else, they could accommodate or neutralize these errors
via boundaries or limits, to interact only with the minor truth and not
its impurities of context.

Note 2: the grey-area or paradoxical logic is functional in the realm of
partial truth, it is optimal for mediating it, such that ~A and ~X~Y~Z
as a context are considered natural conditions to work-through via
3-value and n-value considerations, whereas a binary mindset cannot
allow for the ambiguous reality and forces an interpretation of ~X~Y~Z
into a framework of XYZ and especially of A as a preordained
already-answered question, needing to know the view is ‘right’ and not
‘wrong’, thus incapable of allowing for either self-error in observation
or self-correction in deficient action, perhaps not even being able to
question, due to the bias of needing to know and maintain existing truth
which external information must conform to by default. This is how
ignoring truth can function as evil, even without intent. It is
basically an inability to think beyond existing operating limits and the
need to deny reality which contradicts or stands outside the given
worldview which is ‘all true’. Interactions between these people are
difficult because resolution is unlikely and views are often polarized.
The binarist accepts truth only as it is interpreted in their framework,
not allowing value to truth beyond it, and if reasoned with by 3-value
logic, its truth is invalidated via the onesided interaction, therefore
only verifying the binarist viewpoint in the exchange. It’s not about
truth in this way, it is about controlling it, and some may be
self-aware of this and lying to hold such positions though others may be
true believers and place themselves as finite entities as if omniscient
overseers without the accompanying knowledge or wisdom for that vaunted
position. These are everyday psychological interactions in the landscape
of people that are mediated within ‘logical reasoning’ though oftentimes
binary logic is ungrounded and paradoxical logic is the basis for a more
grounded, error-correcting POV.

Note 3: Perhaps the closest corollary to any of this is Plato and
speaking of the alphabet reflected in the water, which brings up
the issues of reflection and language though also of mirroring.
The mirror as a concept is a philosophical question: how do you
approach conceptualizing its paradox? B = B is one view that
assumes an absolute likeness. Yet without a mirror, what is
seen is also bounded, and so perhaps B = ~B is close to its
actual functioning, if not: B = [virtual-B] to some extent. The
mind-body relation, information and its materialization at this
boundary. And how do you deal with it, if not align or also
transcend it.

The thing is, if the relation is grounded, B => B could seemingly
correspond within certain parameters, potentially. And if it were
ungrounded, a person may look in the mirror in a state of ~B and see
back B as their self-image, via inaccurate reflection. This is more
involved than this approach, to clarify, yet this mirroring functions
very much like the issue of language which reflects a state of identity
through authorship and communication and reasoned exchange.
A person may ‘believe’ their view is public, though it is carried in
an error-reliant state of distortion, warping, and skew that is not
accounted for, and is thus a subset that seeks to represent the
public in its entirety by this view, without having to deal with the
actual accounting for its actual truth. In this sense the funhouse
mirror or 2-way mirror, if a false-view is presented, which then is
mediated by language in this way. So too, newspapers, television
shows, websites, computer software mediate the exchange,
providing a mirror for the self. So what if what is reflected in
mass media is inaccurate and errors are not accounted for
in what is claimed to represent shared truth.

XYZ as an equation could actually be ~X~Y~Z full of skew, reliant on
massive errors and only contain a partial truth, yet be presented if not
believed as the pure answer ‘A’ by onesided biased viewpoints reliant
upon binary processing/thinking. The mirror condition as corollary then
has XYZ on one side and ~X~Y~Z on the other, one pure as an idea and the
other impure yet this impurity not accounted for, simply “believed” to
function this way, if not simply in terms of its physics, its power as
language to engineer dynamics and force frameworks via beliefs that
have effects, regardless of the greater truth involved.

The question it perhaps involves is: what if an observer does not see
themselves in the mirror accurately, how then can they go out and
observe other events outside themselves, or even make determinations
for others, if not operating in a truthful framework or grounded
observations. What if society allows for this, promotes it even, rewards
such behavior, for instance, making lots of money via such
approximations and funny ways of accounting for existence. The question
of the mirror state is more interesting than this and is not adequately
approached or engaged in this, yet what if the state itself cannot look
itself in the mirror and see itself accurately, and that language is
this distorted barrier condition that allows for the illusions to
persist. The virtualization of reason versus its grounding, allowing
many negative dynamics to be solidified for lack of accounting for truth
and demanding its adherence versus making it expedient and only
symbolic, a placeholder for power which replaces it.

Note 4: Listening to music, very seldom hearing something to relate to
anymore in the current era, though occasionally hearing a sound and
enjoying it, synchronizing with it, only to hear a lyric or phrase or
thought in its ‘thinking’ that is not of the shared condition, that then
causes the shared condition to fall out of orbit, only a temporary
unsustainable correspondence. How much of everything is like this.
Where A = A tends toward A => B, or various other sequenced relations.
This is also superposition, a contingency in relations, what aligns and
what falls out of alignment, given the scenarios. Somehow shared truth
went astray or could not be sustained. What happens if that condition
involves society itself, the planet, its operations. It cannot accurately
see itself, oftentimes hiding behind signs and representations, unable
to see itself accurately or disallowing honest appraisals. Instead
requiring ‘belief’ function as the highest truth, obedience, conformity.
Logical reasoning can deal with this through direct engagement,
grounding situations in a context of truth (1) and falsity (0) for
empirical accounting, a more accurate approach to quantification
than simply using numbers and calculating.


[ the future > transformation > language > modeling ]

http://amsconcept.wordpress.com/future/note/

#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime@kein.org