Hi,
Exchanges on the virtues of declarative programming are quite
interesting. Yet the still relatively low traction of XML and XSLT, even
after some decades, remains striking.
Coming from a rather distant field, and admittedly biased, this
perspective is backed by decades of R&D on natural phenomena,
distributed 3D multimedia, virtual environment design, and industrial
collaboration portals, all constantly confronted with complex
entitlement security issues. When a group of experienced architects then
tried to better understand the underlying causes and principles, it
became clear that a better understanding of intelligence was key. On
that basis, it may now be possible to more formally introduce some
different approaches, perspectives, and values.
Despite all the existing work on intelligence and AI, the first
challenge was finding a clear, actionable definition of intelligence,
which ended up as: "the ability to process knowledge". Simple enough,
yet the next questions proved quite interesting, and included "what is
knowledge?" and "how is knowledge processed?".
Without going into too much detail here, let's just note that knowledge
is the structure of reality, as well as models thereof. Knowledge is a
natural phenomenon from which everything evolved. That is why it is so
intuitive and subconscious. Understanding the structure of what exists is a
prerequisite to evolving it.
As a fundamental natural phenomenon, knowledge is very different from
convention-based information. Information is a communication tool;
communication, in turn, is a collaboration tool; and collaboration is a
knowledge tool.
In this strictly causal, hence relative, Cosmos, knowledge can further be
defined as the art of qualification, and the key to its structure and
operation lies in modeling and managing qualified relationships.
Moreover, entitlement security is naturally embedded in knowledge.
In any case, meaning and significance are rooted in knowledge, not
information. Compared to knowledge processing, information processing seems
rather limited.
Because it is so natural, intuitive, and subconscious, nobody needs to
understand knowledge or its operation in order to use it, unless one is
trying to make it "artificial", that is, to computerize it. Accordingly,
the first prerequisite to "artificial" knowledge processing is an
adequate understanding of the natural phenomenon and its operation.
Like all things, "adequate" is relative. For example, with good
intentions, RDF/SPARQL empirically attempt to represent and manage some
structured information, but they impose a somewhat superficial
representation and semantics frame that limits effective knowledge
modeling. For instance, RDF supports modeling relations, yet without any
effective notion of qualified relationships. Consequently, one would
have to fight RDF semantics to effectively represent knowledge, a losing
game. Fortunately, this is not the case for XML, which imposes
comparatively very little semantic framing and remains supportive of
almost any modeling approach. Similarly, querying and matching knowledge
patterns and resources seem quite indispensable, yet they remain
knowledge processing tools, alongside, and embedded within, quite a few
others.
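To make the qualified-relationship point concrete, here is a minimal
sketch, with purely illustrative element, attribute, and prefix names,
of how XML lets the relationship itself carry its qualifications
(strength, validity, entitlement), whereas a bare RDF triple such as
<#AcmeCorp> ex:employs <#JaneDoe> carries none without resorting to
reification:

  <!-- Illustrative vocabulary only, not any standard: the relationship
       element directly carries its qualifications. -->
  <relationship type="employs" strength="0.9" since="2019-04-01"
                entitlement="hr-managers-only">
    <subject ref="#AcmeCorp"/>
    <object ref="#JaneDoe"/>
  </relationship>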
Furthermore, knowledge processing is naturally a parallel stream
transformation operation: through internal metabolic processes that
constantly optimize and evolve knowledge-background resource streams, as
well as through external sensory and motor stream processing for
interfacing with the "outside" world. These external sensory and motor
stream processes also require complex knowledge/information conversion:
inferring knowledge resource streams from information streams on input,
and projecting knowledge resource model streams to information streams
and artifacts on output.
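As a loose analogy for such stream processing, here is a minimal XSLT
3.0 streaming sketch, assuming hypothetical input of the form
<readings><reading id="..." value="..."/>...</readings>, that infers a
qualified "fact" from each incoming information event without ever
materializing the whole stream:

  <xsl:stylesheet version="3.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:mode streaming="yes" on-no-match="shallow-skip"/>
    <!-- Wrap the output stream of inferred facts. -->
    <xsl:template match="/readings">
      <facts><xsl:apply-templates/></facts>
    </xsl:template>
    <!-- Infer a qualified knowledge resource from each information
         event; the confidence qualification is purely illustrative. -->
    <xsl:template match="reading">
      <fact source="sensor" confidence="0.8">
        <xsl:copy-of select="@id, @value"/>
      </fact>
    </xsl:template>
  </xsl:stylesheet>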
Current AI is sophisticated automation, relatively adequate for
inclusion in an artificial mind's reflex, habit, and automation system,
a key component of knowledge processing and intelligence, but it is not
really intelligence, especially as automation does not really understand
knowledge, qualified relationships, metabolic knowledge processes,
meaning, purpose, judgment, tolerance, entitlement, ethics,
meta-cognition, or consciousness, for example.
By providing standards, as well as extensive support for universal
rich-content representation, declarative approaches, functional
programming, sophisticated in-line matching, querying, parallel
processing, streaming, layering, transformation pipelines, and more, XML
and XSLT technologies seem much more appropriate and useful for
knowledge modeling and processing, and hence for effective artificial
intelligence.
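For a flavor of that declarative, functional style, here is a small
illustrative fragment, reusing the hypothetical relationship vocabulary
above, that combines pattern matching with an XPath 3.1 higher-order
query:

  <!-- Declaratively select sufficiently qualified relationships. -->
  <xsl:template match="relationship[@strength >= 0.7]">
    <xsl:copy-of select="."/>
  </xsl:template>

  <!-- Functional-style ranking by qualification strength. -->
  <xsl:variable name="ranked"
      select="sort(//relationship, (),
                   function($r) { -number($r/@strength) })"/>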
In summary, it seems that the future of XML and XSLT, although
potentially wide-ranging, might be greatly powered by effective
artificial knowledge and intelligence.
The recently published "Artificial Knowledge & Intelligence", by Akhu
Sono, https://aki.AkhuSono.com, provides some additional introduction
and reference material on these topics.
Regards.