Creating a Knowledge Architecture for Socio-economic Policy Generation
MSc Dissertation Paper
Paul Suciu (77111757)
18,000 words


Contents

Introduction
Heuristics and iteration
Policy literature review
Introduction to knowledge architectures
The project work logs
Conclusions
Bibliography
Annex 1-9


This paper sets out to investigate the application of open source knowledge architectures to the volatile field of socio-economic policy generation, starting with exploratory research grounded in heuristics, iterative methodology and the philosophy of deconstruction. It then moves on to a more pragmatic implementation through the use of the Liquid Feedback crowd-network policy-enabling platform, showing both the capabilities of the software and possible pathways for improving its consensual decision-making method and the quality of the policy generated. Finally, in the case study of the EU labor market, it attempts to enhance the policy generation process by adding end-user-friendly, pre-populated visualization methods, such as the Timeglider widget API.

Paul Suciu


Introduction

Background

The main reason why this will be a communications-oriented perspective on economics, and not an accounting one, is that as a one-year master's graduate in Accounting and Finance I felt I could not speak from an authoritative position on any particular economic policy matter.1 What I can do, however, after a four-year BA in communication and various IT and project management experiences, is to envision and facilitate a process of economic policy transfer/decoding at community level.2

Through experience I came to the conclusion that personal opinion counts for little in enabling change. I used to engage in fierce, endless debates on matters of policy which, because of my training as a communications specialist, I would often win, until I came to realize that what matters in practical community issues is not a private opinion but the best possible opinion adopted by the largest majority, and that the process by which one arrives there has a major impact on its finality.

I was trained as an economist, but discussing complex economic frameworks is highly problematic, as the level of inconsistency within the field has led to calls from academics to deem it a pseudo-science (Taleb, 2007; Zhu, 2009); the very name Economics is misleading, as a variety of properly supported codes and languages are gathered under this misnomer (Zhu, 2009).

So why did I bother paying 5k a year for an Accounting degree, when my final paper is so removed from the field? It has to do with my perception of the fundamental enforcement of business realities within an institution. Money and its flow allow for survival within Darwinian capitalism, and it is this survival that offers legitimacy to an enterprise. As an outsider I lived under the illusion that the field is forced by its numeric orientation to adopt a much more rigorous framework for systemizing reality than your run-of-the-mill social science.3

1 Unfortunately, a skills-based program such as my Accounting degree hardly lends itself to academic banter and critical thinking beyond its immediate numeric concerns.
2 Pessimists would say I am deluding myself, but I have in fact witnessed a number of times (mostly as studied cases) how policy was changed by individual action, with substantial results. A recent example is the defeat of ACTA at EU Parliament level, after the mobilization of the IT community. This particular instance took not years but merely weeks to accomplish. In my view, as I kept a close watch on the issue, the debate came down to one thing: how the online community managed to organize itself, through better IT literacy, in a manner superior to a back-door policy-laundering lobby sphere.
3 Where in the world can one find a measure of stability if not in the most fundamental/stable part of the economy? I was, of course, unaware of the many compromises that currently exist within the IFRS adoption of principles, sources of friction and debate.


I do, however, remain positive. Agreeing with Gabbay and Hunter (1993) that meta-language and meta-rules have the role of reducing inconsistencies in an improperly formulated language, and having noticed an attempt to methodically implement such a language within the IFRS framework, I decided it was worth a closer inspection.

While codifying the IASB framework, the IFRS aims to create its own meta-language, which can bring some consistency to a field marked by fundamental changes in recent years.4 It is a slow and arduous process, taking years between agenda consultations and post-implementation reviews.5 All the while, the IFRS has also attempted to codify accounting principles for machine code, thereby in fact creating a parallel and this time rather proper meta-language in the form of the “XBRL taxonomy for IFRSs and the IFRS for SMEs to facilitate the electronic use, exchange and comparability of financial data prepared in accordance with IFRSs.”6

This was, for me, proof that a concerted effort at codifying an entire epistemological field could be attempted, that the symbolization of a limited reality was possible, and that, given enough resources, one could arguably systemize the extremely volatile field of policy making.7

The main difference between the IFRS approach and my interest lies in the backing: while the IFRS is supported by private initiative, the kind of policy I envision involves the larger public, organized in a community of thought. It is my strong belief that the community approach to socio-economic policy discussion will eventually be the only established one, acting as a foundation on which the private sector will thereafter be able to build a unified and coherent framework of business that I can unapologetically adhere to.

4 It is also an extremely flawed process, highly contested by its many contributors, even at the level of its most basic assumptions, such as the asset definition.
5 See http://www.ifrs.org/Current+Projects/IASB+Projects/IASB+Work+Plan.htm for examples and current developments.
6 That is because machines do not understand the nuances and possible contradictions of human communication and need a clear code to parse.
7 And the above is not a singular model of development, with similar areas of policy formulation being undertaken at all levels of the EU and beyond, from a variety of perspectives and interests.


Heuristics and iteration

Research question

Is it possible to build a knowledge architecture for generating socio-economic public policy of higher quality and in greater quantity, moving from the current least-common-denominator, populist approach to a better interaction with, and utilization of, the mass user (by eliminating various types of bias)? Can we also make sure, through the use of open source software, that the emerging user community has the tools to re-actualize itself continuously in such a manner that it improves upon the policy generation process? What are the current developments in the field and what can be done to improve upon them? Ideally, can we build an observational model or proof of concept for the theory identified?

Community of intent/community of knowledge/community of production

By bringing community support into policy generation we are attempting to raise the quality of the political discourse and create a better product, both because of the higher number of interested individuals involved and because of the easier adoption of policy, as the impacted group would be the same one that generated it. However, since we aim to avoid ending up once more with the lowest common denominator from a crowd unable to articulate a consensual, efficient and final position,8 we must also describe a mechanism of participatory learning and genuine executive capability within the community - which raises the major issue that, before anything else, we must envision and create a community.9

There should be an economy of production in relation to policy, just as with any other commodity. In the same manner that the management accountant has at his disposal a formidable ERP system,10 which he can feed data and from which he can receive results and updates within minutes, so should the political individual be able to draw up the best plan, based on easily understandable, processable and transferable (and therefore valuable to him) data.

8 Both the delegitimized traditional power structures and the grassroots activism movements that seek to replace them suffer from the same weakness: a difficulty in articulating purpose and the lack of clear operational and managerial frameworks (Tushnet, 1995).
9 “Social formation in the last instance... is not the spirit of an essence or a human nature, not man, not even men, but a relation, the relation of production” (Althusser, 1971, observation on Marx's social materialism). It is necessity that moves people to act together, not idealism. This is supported by Charles Peirce (1931): “it is hard for man to understand this, because he persists in identifying himself with his will”. This limitation of the self in respect to production output is a truth that most individuals come to realize on their own, and it enables them to seek the support of others in tackling difficult issues. Concerted effort does mean co-opting as many members of society as possible and giving them the opportunity to satisfy their own needs, though in this case mostly the need for self-expression and legitimacy at and through the community level.
10 Enterprise resource planning.


Providing the regular citizen with easily manipulable data to support his decision-making process in matters that concern him is an imperative dictated by the “rule of many”11 democratic principle.

Why the need for higher quality and volume of information? The problem is that current policy debate and political discourse are often reduced, because of the exploitation of cognitive bias, to the lowest common denominator (the populist approach), expressed in only a few points of view - mainly the binary of left and right - relegated to a four-year cycle and highly unsatisfactory in a consumer society where individuals get to vote on preferences almost every day.12 Without a sense of control over the political process, and therefore of personal self-actualization, we end up with most of the voter core feeling delegitimized and unmotivated.

Heuristics13 and iteration as a research method?

There are a myriad of issues to be tackled in a practical policy design implementation, as opposed to merely defining and analyzing an academic issue in a standardized format. Theory and practice are not perfectly aligned even when one knows exactly what the outcome should be, never mind when operating on a fluid concept that changes as new data becomes significant through exploration.

While the project design and scope might change, the one thing that cannot be changed is the limited capacity of one individual. In a world of incertitude and change it is now recognized that the human mind employs a variety of shortcuts, which were only fully appreciated when the same principles had to be employed in designing programming languages.14 These methods of learning are what we call heuristics and iteration.15

There were many instances where I operated in the dark within the project, especially within the programming environment, based only on the conviction that I would succeed in overcoming any obstacle, if merely by following strategic and topical cues and guiding myself not by the principle of “the best solution” but by “the convenient solution”. This is what Wikipedia16 defines as “heuristics”, referring to “experience-based techniques for problem solving, learning, and discovery. Where an exhaustive search is impractical, heuristic methods are used to speed up the process of finding a satisfactory solution. Examples of this method include using a rule of thumb, an educated guess, an intuitive judgment, or common sense.”

11 “Rule by the many” is “polity” (which gave us “policy”) in its ideal form and democracy in its perverted form, according to Aristotle's theory of democracy (Encyclopedia Britannica Online).
12 The ubiquitous “Like” button.
13 Cognate of “eureka” or “heureka”, meaning “to find”, referring to a trial-and-error method of investigation used when an algorithmic/structured approach is impractical. http://wordinfo.info/unit/%20781?letter=E&spage=6
14 In computer programming, code is parsed or interpreted by a program running on a PC or a network of PCs according to an algorithm and set outcomes. It is possible to run an algorithm in the presence of uncertainty by applying heuristic methods, where uncertainty is either rounded to the closest convenient/practical value or ignored altogether.
15 While these terms are not known to most socio-economic scientists, they are extremely familiar to programmers, as creating a program that will not crash on meeting an unknown/new value is a constant design challenge.
16 http://en.wikipedia.org/wiki/Heuristics


While the Wiki quote might seem mundane,17 it does provide a link to the pragmatic interpretation of heuristics as a method in IT, where according to Pearl (1983) “heuristics are strategies using readily accessible, though loosely applicable, information to control problem solving in human beings and machines.” We see that the programmer makes little distinction between individuals and machines in this context, as language parsing is an inherent function of both the human thought process and of machine computation.

Iteration, on the other hand, is a lot easier to understand, as it means “the act of repeating a process usually with the aim of approaching a desired goal or target or result. Each repetition of the process is also called an iteration, and the results of one iteration are used as the starting point for the next iteration” (Wikipedia). In respect to this paper, not only am I following the iteration/heuristic model, but I aim to make it a part of my design and to transfer it to the crowd as a method of learning and generating policy output (and, as I mentioned earlier, the project's conceptual/building process and its ultimate functionality are intimately and inexorably linked).
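To make the two terms concrete for readers coming from the policy side rather than the programming side, the sketch below shows an iterative, heuristic (satisficing) search in JavaScript. The scoring rule, the tweak step and the “good enough” threshold are illustrative assumptions of mine, not part of any platform used in this project.

```javascript
// Iterative heuristic search: instead of exhaustively testing every possible
// policy draft, we repeatedly tweak the current draft, keep any tweak that
// scores better, and stop as soon as the result is "good enough" (satisficing).
// score(), tweak() and the threshold are placeholder heuristics for illustration.

function score(draft) {
  // Heuristic rule of thumb: more defined terms and shorter purpose statements score higher.
  return draft.definedTerms - draft.purposeLength / 100;
}

function tweak(draft) {
  // Generate a neighbouring candidate through a small random change.
  return {
    purposeLength: Math.max(50, draft.purposeLength + (Math.random() < 0.5 ? -10 : 10)),
    definedTerms: draft.definedTerms + (Math.random() < 0.5 ? 0 : 1),
  };
}

function iterate(initialDraft, goodEnough, maxIterations) {
  let current = initialDraft;
  for (let i = 0; i < maxIterations; i++) {
    if (score(current) >= goodEnough) break;   // the convenient, not necessarily best, solution
    const candidate = tweak(current);
    if (score(candidate) > score(current)) {
      current = candidate;                     // the result of one iteration feeds the next
    }
  }
  return current;
}

console.log(iterate({ purposeLength: 400, definedTerms: 2 }, 5, 1000));
```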

One can appreciate that attempting to frame a real-life process theoretically is a massive hurdle. For one thing, there is a limit to the theoretical specificity one can bring to the issue, otherwise the ramifications would make it impossible to conclude the project. Yet a degree of specificity is a must, otherwise there would be no discernible original ideas in a wide field of knowledge.

I wish I could say that this research is about analyzing the primary data generated through external user interactivity with the exploratory tool; however, since the building project is part of a much bigger picture, extending at least a couple of years into the future, that is not the case. Instead, the focus of this particular Master's paper is on analyzing the qualitative primary data generated by managing the building and integration of the various policy generating/enforcing features within the IT platform, and the ideas and threads generated by such a specific endeavor. Beyond the numerous technical details, there needs to be a real focus on the need to deliver a “true and fair”18 policy representation perspective. In this I choose to believe that I can borrow heavily from the structured approach promoted by accountancy, through its IFRS Framework.

The systematic approach to data collection here tends to be quite heterogeneous and will undoubtedly generate much more qualitative than quantitative data19 (Saunders et al., 2007). At least in part, I would define this as a managerial control process, where the data collected will be trans-disciplinary in nature (IT, communication and economics).

17 As opposed to academic propriety; despite that, it is one of the better definitions out there, offered through community debate and support - a design which this paper wholeheartedly promotes.
18 From http://www.frc.org.uk/about/trueandfair.cfm. While these guiding principles are difficult to attribute to a particular author, despite being at the center of accounting practice in the UK for a very long period of time, their application is closely monitored by the Financial Reporting Council.
19 And similarly framed conclusions.


Policy literature review

Policy taxonomy20 and structure

Structure is paramount for this subject, as the various policy components have to be represented within the project: easily identifiable at the analytical level, debatable at the decision-making level and easy to communicate during the agenda setting and implementation process. In terms of structure, policies generally possess:

- Purpose statement (almost like an abstract)

- Applicability and scope (allows them to be organized and monitored)

- Effective date of coming into force (except for retroactive policies)

- Responsibilities of key players

- Policy statement

- Background (allows us to understand the policy)

- Glossary of definitions (dictionary)

In its simplest form, the policy analysis model follows these basic steps (a brief data-structure sketch of the components and steps follows this list):

1. Agenda setting (Problem identification)

2. Policy Formulation

3. Adoption

4. Implementation

5. Evaluation
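Purely as an illustration of the structure described above, the sketch below represents a policy and its position in the five-step cycle as a plain JavaScript record. The field names and sample values are my own assumptions, not a schema used by Liquid Feedback or any other tool discussed later.

```javascript
// The five stages of the simple policy analysis model, in order.
const POLICY_STAGES = ["agenda-setting", "formulation", "adoption", "implementation", "evaluation"];

// A policy represented with the structural components listed above.
// All field names and values are illustrative placeholders.
const minimumWagePolicy = {
  purpose: "Set a common floor for youth wages across member states.",
  scope: "EU labor market, workers aged 18-25",
  effectiveDate: "2014-01-01",          // except for retroactive policies
  responsibilities: { drafting: "community", review: "delegates" },
  statement: "Member states shall ...",
  background: "Youth unemployment rose sharply after 2008 ...",
  glossary: { "youth wage": "wage paid to workers aged 18-25" },
  stage: POLICY_STAGES[0],              // current position in the cycle
};

// Move a policy to the next stage of the cycle (no change once evaluation is reached).
function advance(policy) {
  const next = POLICY_STAGES.indexOf(policy.stage) + 1;
  return next < POLICY_STAGES.length ? { ...policy, stage: POLICY_STAGES[next] } : policy;
}

console.log(advance(minimumWagePolicy).stage); // "formulation"
```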

Althaus et al. (2007) propose an eight-stage policy analysis cycle (Figure 1), based on “heuristics” and “iteration”, which is easier to manage than the traditional model presented above, a model that assumes prior expertise in policy matters.21 “A policy cycle is a first foray into complexity, organizing observations into familiar patterns and so providing a guide to action.” Unlike the traditional, hegemonic model, theirs considers a “broader range of actors involved in the policy space that includes civil society organizations, the media, intellectuals, think tanks or policy research institutes, etc.”

Going beyond the scope of policy generation, the rational planning model (RPM, a systemic/pragmatic process organization) intends to enable the user to attain the best possible solution by following a systematic approach in his “heuristic” endeavor. It is not only easily applicable to policy generation, but also serves to illustrate how the process can be seen from an input/output perspective, in a relational grid (Figure 2) extremely familiar to IT programmers (Levinson, quoted by King, 2005).

20 Althaus et al. (2007), my inspiration for the “heuristic” and “iterative” policy generation model.
21 “…professional staff in large government departments. They too were often required to realize significant public policy goals armed only with their disciplinary training and some bureaucratic experience. Even basic civics sometimes proved unfamiliar to those trained as engineers or lawyers. They needed a bridge from technical expertise to the policy domain.” (Althaus et al., 2007)

Figure 1: The Policy Cycle, which Althaus et al. (2007) describe as a “heuristic” model.

Figure 2: The Rational Planning Model (Levinson, 2005).


Policy as a sign/signal/code22

Chandler (1995) says that “the conventions of codes represent a social dimension in semiotics. A code is a set of practices familiar to users of the medium operating within a broad cultural framework… Society itself depends on the existence of such signifying systems”, and continues to say that codes are not just simple conventions, “but procedural systems of related conventions which operate in certain domains, which transcend single texts, linking them together in an interpretative framework”. He then goes on to quote Stephen Heath (1981): “a code is distinguished by its coherence, its homogeneity, its systematicity, in the face of the heterogeneity of the message, articulated across several codes”. Codes help “simplify phenomena in order to make it easier to communicate experiences” (Gombrich 1982, 35).

Signs and codes are generated by human institutions and in turn serve to maintain them, either through self-propagating myths or through careful, gentle symbolic insemination.23 According to Chandler (1995), the most important issue that concerns modern semiotics is that we are not merely the slaves of authority-generated ideology, but active assigners of meaning, in the smallest details of our lives. And in that “we transcend that old imperative described by Sartre in his theory of Being, by not merely being the observer or the observed, but the painters of our whole universe”.

Enabling a community epistemological24 network25

John C. Lilly's early, simplistic definition (1968) of the “human bio-computer” had a lot more going for it than initially thought. By envisioning the mind as a very sophisticated machine, Lilly allowed us to take the next logical step and envision society as a network experience.26 Because of sheer size, the products of this network tend to be vastly superior to those created by corporate users, provided there is a network-wide demand. Policy does in fact meet this criterion, and the only thing that remains is supplementing the capabilities of the human network with an IT architecture that would simplify decision making, allow for easy visualization, give the sensation of control to the individual user and of self-actualization to the community, and provide many other things that were inaccessible before the advent of social platforms.

22 Why codes? As a child in an insecure world of change I actually attempted to create my very own, highly hermetic code of communication. To put it simply, I was a graphic design child prodigy. Even now, that particular code (which was by no means restricted to the visual, and which I have jealously guarded and continued to develop) underlies my every action through its ideological influence, and I feel compelled to justify my particular queer existence by expressing social utility.
23 See the film Inception.
24 Knowledge.
25 In this day and age, when a ridiculous number of educated people cannot find an application for their abilities, I intend to offer all those underemployed linguistics/foreign relations/arts individuals a chance to participate in an organized, socially beneficial activity.
26 With the common-sense limitations of such mechanicism.


The problem is that individuals are not only limited in their individual and communal capability to process code, but are also subject to various types of bias - ironically, because of their own heuristic methods, used to mitigate uncertainty and promote the self. An exhaustive list of such biases has been provided by a mixture of cognitive science, behavioral psychology and other fields, and is too wide to discuss here.27

In disrupting complex structures, bias ultimately tends to be polarizing, which is why we end up with a left and a right for a political spectrum - a choice between 0 and 1, which in itself represents the simplest programming structure possible in a chaotic network. This yes/no design needs to be upgraded with non-polarizing ones, such as the Wh+ group of Who? Where? Why? What? When?28

There are multiple cultural connotations to a seemingly common denotation. Even the most natural, culturally well-adjusted term is culture-specific, bringing up personal associations (ideological, emotional, etc.) of the term. These associations are related to the interpreter's class, age, gender, ethnicity and so on (Wilden 1987, 224). Not only do I want the community I envision to be able to generate its own code interpretation, I want it capable of understanding and analyzing overarching and competing codes. At its most basic level, the site should serve as a very sophisticated tool of policy code breaking/reconstruction, using the best resource available on the market, the human brain (the CAPTCHA case29).

Deconstruction – method/philosophy

As soon as I was able to articulate the title of my initial site iteration,30 I began conscientiously employing the neo-structuralist method of deconstruction in my approach to policy analysis. It is my hope that the lowest common denominator will eventually shift from left-right swings to a deconstructive process.

27 I will attempt to address the issue of cognitive bias mitigation at a later date in the project, after a better observational understanding of the matter, as I am currently concerned only with structurally inherent bias. Suffice to say that such bias can be mitigated by giving individuals the IT tools to build communally agreed structures of thought, whether based on individual or communal observation.
28 I must point out that the rather imperative yes/no refers to a given choice (often sensitive to compliance), characteristic of hegemonic policy generation, while the Wh+ method forces the research of factuals and gives a much broader range of choices, as is characteristic of a diffuse network environment. The Wh+ method is also the preferred one for information gathering in the social sciences.
29 A CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a small image used for security purposes to ensure that the end user is human and that no software bot is using the web server's resources. The problem is that its strength, human recognition, is also its weakness, and human code-breaking networks have been established by varied means to take them down. See http://www.boingboing.net/2004/01/27/solving-and-creating.html on a curious future where commodity pornography, in great quantities, is “used to incent human actors to generate and solve Turing tests like captchas”, and http://web.archive.org/web/20071106170737/http://ap.google.com/article/ALeqM5jnNrQKxFzt7mPu3DZcP7_UWr8UfwD8SKE6Q80: “Paul Ferguson, network architect at Trend Micro, speculated that spammers might be using the results to write a program to automatically bypass CAPTCHA systems. 'I have to hand it to them,' Ferguson said, laughing. 'The social engineering aspect here is pretty clever.'” Maybe I should follow this model to entice users, just as Jimmy Wales did with Wikipedia.
30 www.deconstructingeurope.co.cc, with the important word being “deconstruction”.

Deconstruction,31 by its scope, tends to constitute itself as a challenge to established structures, and policy symbols and codes of communication are no exception. But being a part of semiotics, it does not represent a method per se, but rather a philosophy of approach on which proper and specific processes must be built for operational efficiency.

Textual medium

The complexities of community/consensual decision making32 at the policy level can only be manifested in today's society through the most conventional medium, the textual one;33 it is simplistic in its component elements (the letters) and yet, because of that very simplicity, capable of supporting greatly nuanced textual structures of symbolic meaning.

The textual metafunction34 acts “to form texts, complexes of signs which cohere both internally and within the context in and for which they were produced” (Kress et al., 1996). That coherence in itself does not strike anyone as particularly impressive, but it has a harmonizing effect on policy formation as part of a larger corpus,35 with the effect of eliminating dissent as soon as it appears. That might mean that changing policy could very well entail a concerted effort to engage and change an entire corpus of legislation.36

Textual determinism following adaptive expectations and path dependency37 is inherent in the way information is processed through such an impersonal textual medium. Text is highly susceptible to external influence, especially when its repository is under the control of a single entity.38

31 One of my favorite things about deconstruction is that it doesn't challenge anything directly; it attacks the paradigm and then watches as the construct, or the needed bits of it, fall on their own (the purpose here being to reconstruct the idea in a more code-friendly manner).
32 “Complex decision making”, as I like to call it.
33 From analogy/emotion to digital method. Written convention eliminates the unpredictability of emotions and emphasizes consensus by eliminating the dissent of face-to-face interpretations. The process of learning further makes choosing the best available alternative a willing/discovery process that binds support through constructive thinking.
34 Metalanguage/metacodes function in the background at the same time as our main observable code, but are not usually perceived by the untrained user. Gabbay and Hunter (1993) argue that meta-language has the role of reducing inconsistencies in an improperly formulated language. As most codes can only be understood in reference to other superclass/subclass codes, it is essential for policy not to be analyzed merely on its own, at face value. Whenever metalanguage is used simply to plug holes in policy design, we are offered an opportunity/lever to deconstruct and reconstruct said policy up to the point where a better paradigm becomes available.
35 Or through various component definitions.
36 For example, one of the proposed pathways for legal change at higher EU levels refers to the challenging of national constitutions, which is as fundamental as you can get. In a sense the rejected EU constitution did just that, attempting to challenge established policy corpuses.
37 Page (2006).
38 “History is written by the winners”, etc.


Code parsing/compiling

Parsing, a term used both in linguistics and computer science, is the splitting of a formal language/code into its smallest units, or tokens, which can thereafter be used for syntactic analysis. In the case of policy, these tokens will be constituted more or less by terms eventually included within the dictionary taxonomy. Unfortunately, while policy is based on a more formalized language than common speech, it is still in many ways tributary to natural language, unlike computer code, which operates in a context-free environment. That is why a number of different policy-analysis approaches might be desirable from a code perspective (a small sketch follows the list below):

- Lexical analysis, splitting the language into morphemes (the smallest units possible; in the case of policy these are concepts rather than tokens, iconic representations rather than symbolic ones).

- Syntactic analysis, where we notice the connectors and the sentence statements, and where we can employ paradigmatic and syntagmatic analysis39 on the code syntax and its other quantitative parameters.

- Semantic analysis, where we integrate the complex data remaining and create a full perspective, drawing from available taxonomies.
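To ground these three levels, the sketch below runs a toy lexical and syntactic pass over a one-sentence policy statement, with a minimal semantic lookup against a glossary. The token categories and glossary entries are assumptions of mine for illustration; real policy text would require far richer linguistic tooling.

```javascript
// Toy three-pass analysis of a policy sentence.
// Lexical pass: split into tokens. Syntactic pass: flag logical connectors.
// Semantic pass: look phrases up in a small glossary/taxonomy.
// Glossary contents are illustrative placeholders only.

const glossary = {
  "member states": "the countries bound by the policy",
  "minimum wage": "the lowest remuneration an employer may legally pay",
};

function lexical(text) {
  // Lowercase and keep only alphabetic runs as raw tokens.
  return text.toLowerCase().match(/[a-z]+/g) || [];
}

function syntactic(tokens) {
  // Separate connectors/operators from content words.
  const connectors = new Set(["and", "or", "shall", "must"]);
  return tokens.map(t => ({ token: t, role: connectors.has(t) ? "connector" : "content" }));
}

function semantic(text) {
  // Match multi-word glossary terms against the raw text.
  return Object.keys(glossary).filter(term => text.toLowerCase().includes(term));
}

const sentence = "Member states shall review the minimum wage and report annually.";
console.log(syntactic(lexical(sentence)));
console.log(semantic(sentence)); // ["member states", "minimum wage"]
```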

Taxonomy

Taxonomy is a word that kept repeating itself within my research, until I came to the realization that I was working within a field that desperately needed such a tool to unify what the specialist bias of various disciplines has hermetically isolated, through diverging interpretations of the same topic, in order to protect particular spheres of influence. Therefore I had to create the simplest project taxonomy, to be used as a frame of reference and relationships, enabling me to clearly analyze the code-building process, its functionality and finality. Eventually, I intend for this aspect to become a community-managed dictionary.

39 “Or” and “and”, the simplest Boolean logical operators.
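As a further illustration only (the terms and relations below are placeholders of mine, not the project's actual dictionary), a community-managed taxonomy can start as nothing more than a map from terms to working definitions and related entries:

```javascript
// A minimal frame-of-reference taxonomy: each entry has a working definition
// and links to related terms, so the community can extend or contest it.
// Entries here are illustrative placeholders, not the project's real dictionary.
const taxonomy = {
  "policy": { definition: "a deliberate plan of action adopted by a group", related: ["policy cycle", "code"] },
  "code": { definition: "a procedural system of related conventions (Chandler, 1995)", related: ["parsing", "policy"] },
  "parsing": { definition: "splitting a code into tokens for analysis", related: ["code"] },
  "policy cycle": { definition: "the staged model of policy analysis (Althaus et al., 2007)", related: ["policy"] },
};

// A simple lookup that also surfaces related entries, as a dictionary page would.
function lookup(term) {
  const entry = taxonomy[term];
  return entry ? { term, ...entry } : { term, definition: "not yet defined by the community", related: [] };
}

console.log(lookup("code"));
```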


Introduction to knowledge architectures

“Although policy development and enforcement itself is a political or cultural process, not a technological one, technical systems architecture can be used to determine what policy opportunities exist by controlling the terms under which information is exchanged, or applications behave, across systems” (Taipale, 2004). We have seen the difficulties arising from policy discussion as a subject/code. It is clearly impossible to run such a complex code on a human network alone;40 therefore, in this chapter I intend to showcase the theory for a supporting IT architecture.41

In its simpler server-client form, matching Shannon and Weaver's original communication model (1949), Taipale (2004) presents his architecture as in Figure 3 below.

40 Knowledge architectures can no longer be upgraded exclusively on pure human social networks.
41 While I am using Taipale's work (2004-2010) to legitimize and further evidence my own, my product is not based on his models, as I came upon them rather late in my literature review process. What Taipale (2004) calls a Policy Management Architecture, coming from the control/moderate/enforce policy position, I call a Knowledge Architecture, a more generic term that emphasizes the need for community education and self-determination.

Fig. 3 Policy management architecture, client-server reference model (Taipale, 2004)


The difference is that his architecture is distributed across the Internet and does not attempt to facilitate user feedback and convenience on one branded site structure, as is the case with my plan. The end user is less of a policy generator and more of a receiver, according to the hegemonic model.42

Of course, one could argue that in his network-stack model (Figure 4) Taipale addresses the issue of user cooperation and leaves out the anonymous originator, but counter-arguments can be made that:

a. The application layer, with its forums, collaborative work tools, directories, etc., is too distributive (spread across the Internet into factions and groups) to be able to offer a consistent alternative to the hegemonic policy generator.43

b. The audit tools remain once more the prerogative of a limited group, anonymous in its intentions and presence.

c. The legacy data repositories described are extremely difficult to access for the average civic user.44

d. There is no policy code description, despite the author being fully aware of the importance of syntax/semantics/pragmatics in this style of communication.

42 This simple model allows us to view the main points of contention:
1. It has a distorted feedback loop, with an anonymous source as the originator of policy and in charge of semantic control on the server side, with the end data user required to subscribe.
2. There is an implicit gatekeeper/monitoring element in the oversight logs, which means control of the architecture is not in the hands of the end user.
3. It does not address the community user.
43 Not only that, but it is unlikely that such a distributed layer will be readily accessible to the civic user, who will once more find himself merely a receiver of policy created at a plutocratic level. The only way the end user can be motivated enough to use this system is to be legitimized through the “power of the many”, the social user, which is not addressed.
44 As they are in non-standard formatting and difficult to visualize. The current design is only accessible to the most educated of policy readers, who from their expert position become in effect the leading policy-generating plutocracy.


Introducing the notion of middleware45/appliance

Basically, middleware is the administrative component of the architecture. A layman's split by design functionality, in the context of a policy architecture, gives us the following (with the most popular, and invariably open source, implementations):

1) Scripting languages allow for the automation of the myriad scripts that run in the background, allowing users to focus on website interactivity.

a. PHP, the most popular server-side scripting language, designed for the generation of dynamic web pages (which allow for user interaction, control and modification of, say, an SQL database). Liquid Feedback is written partially in PHP, for standardization purposes (the backend). The frontend, however, is written in WebMCP, a rather unknown competitor of PHP, which allows for enhanced client-side features beyond PHP's limitations.

45 “In enterprise architecture for systems design, policy appliances are technical control and logging mechanisms to enforce or reconcile policy (systems use) rules and to ensure accountability in information systems… Any form of middleware that manages policy rules can mediate between data owners or producers, data aggregators, and data users, and among heterogeneous institutional systems or networks, to enforce, reconcile, and monitor agreed information management policies and laws across systems (or between jurisdictions) with divergent information policies or needs. Policy appliances can interact with smart data (data that carries with it contextual relevant terms for its own use), intelligent agents (queries that are self-credentialed, authenticating, or contextually adaptive), or context-aware applications to control information flows, protect security and confidentiality, and maintain privacy. Policy appliances support policy-based information management processes by enabling rules-based processing, selective disclosure, and accountability and oversight.” (Taipale, 2004)

Fig. 4 Policy management architecture, network-stack reference model (Taipale, 2004)


b. JavaScript, a client-side object scripting language, primarily employed because it enables enhanced interfaces in dynamic pages. The visualization widget Timeglider, employed here for creating timelines, is a library of JavaScripts.

2) Relational databases. SQL in this case is the database query language, with the PostgreSQL object-relational database management system required by the Liquid Feedback software for user logs. Postgres is employed by a multitude of scalable architectures, such as Yahoo!, Skype, Reddit and Instagram.

3) Markup languages are simply syntactic conventions of text annotation that can be easily read by humans but also employed by programming languages. A true open source repository will have all its knowledge codified in this manner, so as to allow for easy, automatic transfer, search and selection of textual code between various platforms, through API support for such codes.

a. XML, on which HTML and RSS are built, has become the default format for most office-productivity tools. The standard is so popular that in some jurisdictions it has become the default public repository format.46 Standard formats such as this are also the ultimate battleground for corporate control.47

b. JSON, derived from JavaScript, is utilized for data serialization as a more effective alternative to XML between a server and a web application. Timeglider uses both XML and JSON, but in different manners: the first for direct HTML representation of exceptions and the second for true archiving of data (a small sketch of this kind of serialization follows the list below).

4) Server software. Lighttpd is an open-source web server optimized for “speed-critical environments”, potentially allowing very large numbers of users to connect at the same time (up to around 10k connections per second48). Popular user sites such as YouTube and The Pirate Bay use Lighttpd.

5) Operating systems, or OSs, need no real introduction, being a ubiquitous part of daily life.

a. Server side: GNU/Linux Debian is the preferred choice for server architecture because of its low resource utilization. While mostly operated through a terminal console, it can be customized with a Graphical User Interface (GUI). It runs on the Linux kernel and is completely free software.

b. Client side: Linux, Windows, Mac OS, etc. Because the technologies described above are cross-platform, users are free to engage the knowledge architecture from a variety of devices and environments.

46 It is part of the open formats supported in the UK: http://www.cabinetoffice.gov.uk/sites/default/files/resources/open-source.pdf
47 See http://arstechnica.com/uncategorized/2008/10/norwegian-standards-body-implodes-over-ooxml-controversy/ where Microsoft pushes its own format of XML, OOXML, on terms that Richard Stallman, father of the free software movement, strongly protests: http://www.gnu.org/philosophy/no-word-attachments.html
48 Exposing a structural weakness to distributed denial of service (“DDoS”) attacks, meaning that at more than 10k connections per second the server becomes inaccessible; such attacks are a common feature nowadays.


6) Multi-paradigm programming languages, such as C and C++, which are used to implement most of the other languages and programs and also serve as a basis for hardware design. They provide an indirect link to the mathematical syntax that enables very complex IT processes.
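To make the serialization point from item 3b concrete, the sketch below round-trips a timeline event through JSON in JavaScript. The event fields are placeholders of my own and do not reproduce the actual Timeglider data format.

```javascript
// Serializing a timeline event to JSON and back. The field names below are
// illustrative only; they are not the official Timeglider event schema.
const event = {
  id: 1,
  title: "Youth Guarantee proposal published",
  startdate: "2012-12-05",
  description: "EU labor market policy milestone (illustrative entry).",
  tags: ["EU", "labor-market"],
};

const wire = JSON.stringify(event);          // what travels between server and widget
const restored = JSON.parse(wire);           // what the client-side widget would consume

console.log(wire);
console.log(restored.title === event.title); // true: a lossless round trip
```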

By their position within the general system, Taipale also identifies:

A. Data-enabling tools. Some other examples of middleware include “analytic filters, contextual search, semantic programs, labeling and wrapper tools” and content personalization.

B. Data-restricting tools49 “for selective disclosure, such as digital rights management (DRM), anonymization, subscription and publishing tools”.

C. Safety features, such as “technologies for accountability and oversight50 [which] include authentication, authorization, immutable and non-repudiable logging, and audit tools, among others” (Taipale, 2004).

Open source51 versus closed repositories

In his model, Taipale advertises the use of protected data repositories. What he forgets to mention is that security models can and do interfere with ease of access, even in merely monitoring policy generation. This paper would not have been possible without open source. Not only did it rely heavily on open source infrastructure (such as the Debian OS; Liquid Feedback's PHP, Lua and C; Timeglider's JavaScript; HTML, XML and JSON; etc.), but it also relied on free knowledge repositories, starting with the ubiquitous Wikipedia (for very fast topic identification) and extending to GitHub and stackoverflow.com (software).

While power in the outside world depends on the level of human organization, conceptual power on the net is more about standards setting. In the open source model, one does not control the outcome; one just creates the tools and tries to work with the emerging community to develop them. This has been the case for every major project since Richard Stallman launched the free software movement from which the “open source” concept later grew. It is not a perfect concept, as I recently had the chance to observe when a sufficient developer community failed to form around Timeglider and its makers pulled it out of the MIT license.52

49 Which personally I do not recognize as part of a Knowledge Architecture, but which are a definitive feature of a Policy Management Architecture.
50 While I do not disagree with Taipale on the need for security, some of the language he uses makes me cringe, such as “international and national information policy and law will be reliant on technical means of enforcement and accountability through policy appliances and supra-systems authorities”. Still, the same author goes on to say that “control and accountability over policy appliances between competing systems is becoming a key determinant in policy implementation and enforcement, and will continue to be subject to ongoing international and national political, corporate and bureaucratic struggle. Transparency, together with immutable and non-repudiable logs, are necessary to ensure accountability and compliance for both political, operational and civil liberties policy needs… The development, implementation, and control of these mechanisms – as well as the development of the governing policies – must be subject to wide-ranging public discourse, understanding, and ultimately consensus” (Taipale, 2004).
51 Both open source information and software.

The web browser is an integral part of the knowledge structure, acting as a semantic selector at the web level, just as the policy topic search will be for the LQ platform and the integrated search and legend functions are for Timeglider - three levels of search into our aggregated data (web, website, timeline), just for visualization.53 However, as soon as one starts manually indexing information, one realizes an obvious limitation of the browser search function: it cannot search structurally non-indexable data - the “deep web” (such as dynamic pages, locked repositories or simply poorly downloadable content, like the excruciating amount of PDFs the EU institutions post online54) and the “no web” (data that was never meant to be shared in public, such as that available only through Freedom of Information acts55). It is hard, grunt work, which requires the effort of many to put into proper context.56
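Footnote 56 contrasts sequential scanning with an index; the sketch below shows the minimal form such an index takes - an inverted index in JavaScript mapping words to the documents that contain them. The documents and the query are invented for illustration.

```javascript
// A minimal inverted index: one pass over the corpus builds a map from each
// word to the set of documents containing it, so later queries become lookups
// instead of full scans. Documents below are illustrative placeholders.
const docs = {
  "doc-1": "youth unemployment policy in the EU labor market",
  "doc-2": "minimum wage policy review",
  "doc-3": "timeline of EU labor directives",
};

const index = new Map();
for (const [id, text] of Object.entries(docs)) {
  for (const word of text.toLowerCase().match(/[a-z]+/g)) {
    if (!index.has(word)) index.set(word, new Set());
    index.get(word).add(id);
  }
}

// Query: intersect the posting sets of each query word.
function search(query) {
  const sets = query.toLowerCase().split(/\s+/).map(w => index.get(w) || new Set());
  return [...sets.reduce((acc, s) => new Set([...acc].filter(id => s.has(id))))];
}

console.log(search("EU labor")); // ["doc-1", "doc-3"]
```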

Because of the nature of my work, I was spared having to use protected data repositories.57 If I cannot actively link to them from my website, they are useless, dead and hard-to-upgrade pieces of information, becoming more obsolete as time passes. On a site that should be accessible, like a governmental one, not updating information is a cue either to indifference58 or to a lack of funding. But in closed repositories, the design itself forces data to become obsolete, so why contribute to such a process? Ideally, the information flow should follow:

Knowledge repositories – search – exposure – critique – debate – decision making – recording – rehydrating/enhancing/adding value to information

52 Refer to my communication with the Timeglider architect, Michael Richardson, where he points out that “The major users of the widget did very little to contribute to code or documentation. People are accustomed to libraries being available, and don't realize that open source is by necessity a community effort… Some developers simply took our widget and (without any real contributions to code) began building businesses around it — apps that were very similar to Timeglider” (Annex 1). Also, as you can see, unlike the academic explanation, it took me only six paragraphs to explain the idea to a concept/programmer/domain developer, because of his previous involvement with the public domain and the similarity of business development. How can we call ourselves economists when the basic tool of our trade, information, namely of an economic nature, evades us? I am choosing to interpret his last statement as a confirmation of support, so I am going ahead with using his widget. I believe that his genuine intent was to offer a product that contributed to projects such as mine, so in that sense there is a degree of satisfaction for the Timeglider author in associating his name with my concept.
53 There are also a series of semi-automatic search functions, such as code validators, for example http://jsonformatter.curiousconcept.com/, or the various other integrated proprietary functions for editing spreadsheets and text documents.
54 It is almost as if governments intentionally make data hard to index, since anyone building a webpage knows what search engine optimization is. There just does not seem to be a coherent policy for end-user information. It is almost as if the end user is relegated to second string, in the care of his higher entities, the nation-states. EU institutions are too important to deal with the end user directly, even though their constitutional charter obliges them to do so. The data gathering through manual web crawling was incredibly boring, made even more frustrating by the intentionally limited features, but we need to move away from the generic Wikipedia approaches and create specialized repositories for socio-economic policy analysis.
55 Most of this data was by design never meant to be found. It is strange how, starting from my original concept and intent, I find fundamental interoperability problems at the level of the socio-economic information structure, which is considered critical both to the information of the political unit (the citizen) and to the functionality of the larger international/union network. Hiding information behind a façade of protectionism is pointless, and it shows how little the bureaucratic mind really knows about information.
56 Compare this inefficiency with a modern search engine: without an index, the search engine would have to scan every document in the corpus, which would require considerable time and computing power. For example, while an index of 10,000 documents can be queried within milliseconds, a sequential scan of every word in 10,000 large documents could take hours. Part of my idea is to have the community do the scanning work for all, so that the individual search can be solved through an easy visualization search.
57 Open source and knowledge initiatives tend to be very easy to access, by design and intent.
58 I quote Rick Falkvinge, the founder of the Pirate Party: “Almost all the world's new creators are already working in the new paradigm; creating despite the copyright monopoly, rather than because of it… those laws can and will change as the 250 million Europeans who share [a free information] culture come into power. 250 million people is not an adolescence problem; it is a power base of 250 million voters. As these people start writing laws, they can and will kill those monopolies at the stroke of a pen.” And as this declaration by Eric Kriss, Secretary of Administration and Finance in Massachusetts, proves, things are already moving there (2005): “It is an overriding imperative of the American democratic system that we cannot have our public documents locked up in some kind of proprietary format, perhaps unreadable in the future, or subject to a proprietary system license that restricts access.”

Fig. 5 A copy of the 19 August 2012 e-mail conversation with the Timeglider lead concept architect, Michael Richardson, which sees me morph into an open source activist somehow:

Hey there Paul
I really appreciate your email. I completely sympathize with your overall feeling regarding the open-source world, and our decision to "close" the source down to less-than "open source" licenses. I think once Timeglider is more established, we will be able to afford to open-source our core widget. For a year, I did have the widget out there under the MIT license. Basically:
• The major users of the widget did very little to contribute to code or documentation. People are accustomed to libraries being available, and don't realize that "open source" is by necessity a community effort. Only a couple generous developers provided feedback or actual code — amounting to maybe a dozen lines of code.
• Some developers simply took our widget and (without any real contributions to code) began building businesses around it — apps that were very similar to Timeglider.
A company offering the core of its software as an open source widget has to be in a very strong position — and has to see many benefits in doing so. We definitely want to grow to that point, but meanwhile, we need to throttle the license. We still have a lot of companies developing with the widget, both commercial and non-commercial.
Good luck with LiquidFeedback: it seems like a very cool project.
Cheers, Michael
co-founder, lead developer, www.timeglider.com, [email protected], cell 208.850.8512, twitter @timeglider

On Sat, Aug 18, 2012 at 11:53 AM, Paul Suciu <[email protected]> wrote:
Hi, Michael,
While browsing the net like every penniless user, first I saw this https://timeglider.com/jquery/?p=intro and thought to myself what an excellent user friendly idea. Beautiful software, free of charge and excellent explanation. Then I saw this http://timeglider.com/how_it_works.php and I'm sure others have explained my disappointment. It is only fair that you guys should profit from your work, however was there no other way to do this than going fully proprietary? As a regular user, I became horrified when Youtube introduced compulsory commercials, more so when Google Maps announced it was going to charge a fee (only to industrial users, but still). Microsoft is locking Windows 8 with its software store and Facebook is flooding people with crap


Liquid Feedback

It is not often that one is deeply involved in a conceptual process taking months, only to find out that the ideas articulated within have already taken shape elsewhere. While on my own I had started to realize that it was possible to articulate social change by means of highly interactive, dynamic web pages that facilitate user control and group consensus (such as through the ubiquitous PHP, the definitive public community technology), a German team had already had a viable project in the pipeline since the second half of 2010.59

Claude Lévi-Strauss said that “the process of creating something is not a matter of the

calculated choice and use of whatever materials are technically best-adapted to a clearly

predetermined purpose, but rather it involves a dialogue with the materials and means of

execution” (Lévi-Strauss 1974, 29). What about using materials that were made by a third

party, especially in the case of a process as complex as policy analysis/transfer? Well, undoubtedly the design choices 60 and the implicit purpose of LiquidFeedback have had a significant impact on the way I chose to design my own project, as the software "speaks" from a position of authority/benchmarking with respect to IT-enabled decision making.

In a hegemonic dominance stability system, you have a top to bottom policy

generation model and the IT architecture will reflect that, as in Taipale’s case. But with the

recession hitting and the breakdown of faith in stability in the hegemon, the dependent

individuals/citizens will become independent decision makers 61. We must remember, however, that we had a symbiotic collaboration with the now weakened hegemon, which will move to restore the status quo (restrict the network's ability to generate policy, as we can see in a series of modern pieces of legislation at global level 62); it is therefore essential to move from the simpler social networks (trend setters) to specialized ones that permit the expression of crowd policy at such a level of quality that it begins to alter the hegemon's paradigm 63. One must not understand the hegemon as an enemy, but rather as a structure with an established usage

59 The reason that I had missed it was partly because the project was in German, directed towards a German audience, and partly because the academic environment moves at a crawl compared to the speed at which new technologies appear and should be diffused. The traditional approach to research takes forever and, as we have seen from my earlier messenger list/e-mail/e-forum investigation, it takes about two years for things to sink in and papers to be published. Even now the group barely publishes anything in English besides some introductory material.
60 The team behind the platform has experience in "data base solutions: like enterprise resource and planning, point of sale applications, reservation systems" (Nitsche, 2012).
61 Willing to organize themselves ad hoc (we notice a rise in entrepreneurship, due to social/personal necessity in periods of crisis, after the failure of the social contract) into the simplest and most convenient form, that of an amorphous network, which can begin to organize and generate its own policy.
62 http://www.forbes.com/sites/larrydownes/2012/08/09/why-the-un-is-trying-to-take-over-the-internet/2/
63 By that I mean what the LF platform intends to do: to find social levers within established political institutions, especially high-influence ones such as parties, where of course they will begin with the easier-to-influence members, the lower castes.


offering opportunities and challenges. That is why it’s essential to do two things to improve

individual control:

- Enhance the quality/quantity of his decision making process. 64

- Enhance the reach of his decision making process 65.

That is where PHP-enabled participatory platforms such as phpBB and LiquidFeedback come into play. Through the mechanism of shared decision making, we can build a community of intent. We must, however, distinguish between the crowd of intent 66 (the starting point) and the community of knowledge (the middle point) as two different facets of what we are attempting to steer towards our goal of policy generation 67 (production).

The ultimate goal would be to build a community that can formulate not only its goals by means of this website, but also new pathways of action, such as educating its own agents of change 68, after replacing the anonymous policy-generating user with a community think tank policy-generating user, which emphasizes participation and ultimately possesses civic legitimacy through self-representation 69.

Proxy voting (Fig. 10) with the Schulze method 70 for preferential voting (LF Annex, Fig G) is the precise mechanism by which this representation is achieved in LF. "Transitive proxy… was first

suggested in internet forums in the United States… around the year 2000. Back in 2009 the

growing Berlin Pirate Party wanted to perpetuate the chances for every party member to

participate in both the development of ideas and decisions. And they thought transitive proxy

voting could be a promising idea.” From there the team started a “democratic proposition

development process and preferential voting” (Nitsche, 2012).

64 This can be satisfied by either providing the individual with higher quality/volume data input (to create his own opinion) or exposing him to higher quality/volume data structures (community work), which he can adopt or enhance (through debate and research).
65 Again this can be done in a simple manner, by enhancing the penetrating power of his decision: by creating a front of action through association, by creating the right context for the diffusion of his idea, if valuable, and by allowing direct interference over agenda-setting policy activities.
66 Freud, mass psychology.
67 Intent without knowledge is blind and knowledge without intent is lame.
68 Fish (1980) called this the "interpretative community".
69 Talking from an analogous position, let me put it this way: if LiquidFeedback is the scalpel able to cut through the tissue of society, then my addendum should function as its eyes and give it the ability to identify the best spot for an operation without leaving a gap. This community should be able to learn and self-actualize continuously and, as I mentioned before, a blind community of intent is insufficient for policy analysis.
70 Clone-proof Schwartz Sequential Dropping (the Schulze method) allows for the expression of preferences (when the favorite doesn't win, the vote moves to another preference). This is done to ensure that the user's vote counts and that votes do not get wasted by variations of the same idea that exclude each other from the top position. Apparently the Schulze method is vulnerable to an instability that generates a less desirable outcome for players in the case of widespread strategic voting. While I could go into Game Theory further, I will restrict myself to mentioning that by improving decision making in a community of intent, by upgrading its capabilities through a community of knowledge, we might be able to reduce that particular vulnerability.
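To make the Schulze mechanism mentioned in footnote 70 concrete, here is a minimal sketch in Python of the winner computation (pairwise preferences followed by widest-path strengths). It is my own illustration with invented candidate names and ballot formats, not LF's actual implementation.

def schulze_winners(ballots, candidates):
    # ballots: list of dicts mapping candidate -> rank (lower rank = preferred)
    # d[x][y]: how many voters strictly prefer x over y
    d = {x: {y: 0 for y in candidates} for x in candidates}
    for ballot in ballots:
        for x in candidates:
            for y in candidates:
                if x != y and ballot[x] < ballot[y]:
                    d[x][y] += 1
    # p[x][y]: strength of the strongest (widest) path from x to y
    p = {x: {y: d[x][y] if d[x][y] > d[y][x] else 0 for y in candidates} for x in candidates}
    for i in candidates:
        for j in candidates:
            if i == j:
                continue
            for k in candidates:
                if k != i and k != j:
                    p[j][k] = max(p[j][k], min(p[j][i], p[i][k]))
    # winners beat or tie every rival on path strength, so clone proposals cannot split the vote
    return [x for x in candidates if all(p[x][y] >= p[y][x] for y in candidates if y != x)]

# hypothetical three-way issue, where A and B are near-clones of the same idea
ballots = [{"A": 1, "B": 2, "C": 3}] * 5 + [{"B": 1, "A": 2, "C": 3}] * 4 + [{"C": 1, "A": 2, "B": 3}] * 3
print(schulze_winners(ballots, ["A", "B", "C"]))   # -> ['A']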


Current users of the platform include:
a. The German Pirate Party (political party, 7% representation in Germany) – for strengthening the power of individual members in decision making 71
b. Friesland (a district in Germany) – wants to adopt the platform for civic representation and is currently working with the LF team to modify it 72
c. Slow Food Germany (NGO) 73 – for managing internal policy
d. Private instances, such as the one I aim to set up 74

71 "There are differences between the chapters when it comes to actual usage. In Berlin LF is part of the statutes which underlines the importance for the Berlin Pirates – while – on the federal level the function is less defined. Apparently (on the federal level) LF helps the members to get an idea of which propositions can get a majority within the party. Some board members declared they decide based in LF results and LF seems to be helpful for the preparation of party conventions as many propositions are prediscussed and maybe enhanced in LF."
72 That involves "sync[ing] the timing of certain initiatives in LiquidFeedback to the political processes" (Nitsche, 2012). The platform has already spread to the Netherlands and the US and has been translated into over 10 languages.
73 The unusual thing is that, with the platform starting as a political representation tool from multiple sources and with competing platforms such as Adhocracy (not open source) emerging as LF forces change in competing entities, should the system actually catch on I envision a period of competition, congruous with Taipale's (2004) "subject to ongoing international and national political, corporate and bureaucratic struggle".
74 Though I don't have any excessive expectations, either pragmatically or ideologically.

Fig. 10 Proxy voting representation – behind the dotted line are the direct-vote proxies and individuals (GNU license image, Wikipedia)


LiquidFeedback preexisting functionality 75

As mentioned before, the project conceptual/building process and its ultimate

functionality are intimately and inexorably linked. LiquidFeedback was built on top of, and to support, a fast peer feedback loop 76, which is why it is biased towards speedy efficiency 77.

My approach stems from an academic/theoretical perspective, raising the need for the

improvement of public policy discourse. It matches the political side only in respect of

generating a legitimizing political experience.

Also, unlike the established phpBB software discussed before, LiquidFeedback is a rather recent addition to open source and its increased versatility comes at the cost of working with insufficiently tested software. This raises the issue of security/accountability,

especially for a platform already employed in civic representation (Friesland). For a full

functionality disclosure, see the attached annex or the demo website at

http://dev.liquidfeedback.org/lf2/index/index.html (direct link) or http://liquidfeedback.org

(foundation link).

At the typical user level, an LF member can:
- start an initiative (proposal), which becomes an issue (to which people can add other proposals). When one proposal in the issue reaches a certain quorum, it is considered worthy of further debate and moved to the top of the discussion list. Then there is another period of debate, after which users vote on the most popular initiatives (the ones with little representation are eliminated from the voting process to save users' time) – establish an agenda
- support existing initiatives (counting acceptance)
- suggest enhancements for existing initiatives (syntagmic debate) – analysis through debate
- start alternative initiatives (paradigmic debate)
- vote on all available alternatives at the end of the process
- transfer his vote to make it count for his own wing in a given issue's decision making

- delegate authority of vote/debate to a delegation, which "can be seen as a

transferrable power of attorney for both the discussion process and the final voting.

The delegation can be made by unit, area or issue and be revoked at any time.

Regardless of existing delegations a member can participate in a discussion and/or the

75 Further details in the LF annex.
76 That tends to generate Twitter-style opinions.
77 So fast, in fact, that it risks eliminating essential issues and vulgarizing the policy analysis process. It can easily become just another arena for rapid information exchange without majority critical thinking. If I can persuade you with my wit, I have your vote, regardless of whether the issue has found a satisfactory finality or not. It is a mirroring of party politics, and it was designed to serve the interests of a real-life party. It deals in pragmatism.


voting which disables the delegation for the given activity. A proxy cannot vote in the

presence of the principal” (Nitsche, 2012).
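As a reading aid only, the following Python sketch shows one way the transitive delegation rule quoted above could be resolved for a single issue. The data structures are assumptions of mine, not LF's schema; real LF additionally distinguishes unit, area and issue delegations and keeps them revocable at any time.

def effective_voter(member, delegations, direct_voters):
    # delegations: member -> proxy still in force for this issue (revocable, so the
    # caller passes only the delegations currently valid)
    # direct_voters: members who cast their own ballot; a proxy cannot vote in the
    # presence of the principal, so the chain stops as soon as someone voted directly
    seen = {member}
    current = member
    while current not in direct_voters:
        proxy = delegations.get(current)
        if proxy is None or proxy in seen:   # no delegation left, or a cycle: the vote is not cast
            return None
        seen.add(proxy)
        current = proxy
    return current

# hypothetical chain alice -> bob -> carol, where only carol voted herself
print(effective_voter("alice", {"alice": "bob", "bob": "carol"}, {"carol"}))           # -> carol
print(effective_voter("alice", {"alice": "bob", "bob": "carol"}, {"alice", "carol"}))  # alice voted herself: -> alice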

Security – member access registration is restricted:
- by code (password invitation) that defines only one virtual instance per individual. The control password can be withdrawn from members who leave the group/party, etc.
- by civic registration, on which the developers are currently working, which would assign individual document data to the unique virtual persona to guarantee only one vote per person.

By reducing moderator control, the creators have put a lot of the responsibility in the hands of the community to moderate itself. This is done in the hope that the crowd control of

policy generation at site level will ensure accountability, trust and system legitimacy78. This has

been done before, although not intentionally I suspect, but as an emerging process in

insufficiently defined frameworks (such as the original Usenet79).

The program is designed to operate even in the case of non-collaborative crowds that refuse delegation, by automatically sorting policy alternatives (and eliminating the least

preferred ones) - “large groups with real conflicts using strict rules in a predefined process

without moderator interference.”

Self-moderation protocols include:

A. The "bubbling system" 80 of allowing the best topics to take top billing ensures that the alternatives appear in order of popularity.
B. Transparency ensures individual user accountability.
C. The "proxy system" ensures system moderation, as proxies' support for any initiative will likely bring it to the top, and to be a good proxy one must be an implicit moderator for a category. The system even has an expiry date for "dead user" support.

In conclusion, LiquidFeedback is:

- Offering functionality that matches our intent (especially the proxy voting/Schulze

algorithms).

- Solving theoretical/pragmatic middleware questions

78 Again, it's a feature that came about by design. In following the philosophy of getting the community to decide for itself in the real world, they went a step further and ensured self-determination at site level (in a mirroring of the real-life process).
79 User moderation protocols have been detected as game theory has been employed to study politics through elaborate Usenet simulations; the precursor of the modern forum has shown three different behaviors in conversations:
- unregulated environment, anarchy, 4chan style
- regulated environment, protocol
- self-regulated, netiquette, FB style
80 Similar to Reddit's system for topic ranking by community vote.


- Denaturing our intent, because of its design/target intent (a rapid decision making platform for a political party)

- Not offering essential knowledge functions, forcing API and community protocol

developments

- Still just a base, an incomplete design of bare bones81, which requires a target

audience

I’m now going to assume that the accountability/security and decision making

processes for my knowledge architecture are covered through the LF platform’s middleware

and move on to describe the knowledge enhancing processes I propose for the policy

generation process.

Knowledge process modification proposals for the LF platform

Through the facilitation of LF we have our community/crowd of intent. What we must do now is support this community with the necessary tools so as to also turn it into a community of knowledge. Knowledge and intent are what assure us of producing a quality final item – policy. Once again, while the LF software offers a trove of opportunity to the political individual, for the academic researcher it is a rather poor proposition, as the level of communication is no better than on any other forum and individual users might feel delegitimized by being corralled through the 10% quorum (for the issue-to-initiative upgrade of proposals) and the 50% quorum (validity of winning issues) requirements.

Complexity is definitely an issue here, as most users are used to either social interaction (Facebook), trend following (Twitter, Yahoo) or exposure articles (Wikipedia) with respect to topics of interest. What I don't want to do is stifle the creativity of a few by enabling too much moderation 82. Imagine a site that grows in complexity not just in a linear fashion, but in a network manner that aims to harness specific processes of the human mind 83. I also

wish to avoid having untrained individuals lose themselves in a bad process84, and create a

new plutocracy of those that can adapt versus the average user.

The means to achieve the desired enhanced platform functionality for the

Liquidfeedback decision making software is by using the new Application Programming

82 As forums become more and more complex and require similarly comprehensive forms of analysis, we notice either tighter control of the discussion topic or quality degeneration of the discourse.
83 Twitter, for example, has such a low opinion of an individual's capacity for retention that it has restricted its feed messages to 140 characters. After that, the average user is considered spent without further input and the building of vertical content, which re-actualizes itself. Since the format is very popular, they were definitely right. Wikipedia, on the other hand, has no such issues, developing a horizontal model aimed at the rare encyclopedic user or, much more often, at the occasional user (likely a student). While it has a vertical process of peer review, that is restricted to the editorial side and never seen by its regular users. By inference, one should suppose that the minds behind Twitter do in fact utilize complicated horizontal forms of communication that they do not make accessible to the wider public.

Jonassen (1997) stresses that "well-structured learning environments” are useful for learners of all abilities, while "ill-structured environments” are only useful to advanced learners.

Page 28: Creating a Knowledge Architecture for Socio-economic Policy Generation

Knowledge architecture for socio-economic policy analysis

27

Interface 85, which LF supports with release 2.0 86, and by developing specific standards of presentation/community-supported protocols within the platform. The additional functions for enhancing LiquidFeedback that I propose are 87:

A. Enhanced visualization – timeline (preventive role)

B. Enhanced visualization – exposure draft protocol (imperative role)

C. Semantic search – better search function

D. Semantic clarity – dictionary

E. Enhanced search – elapsed topics tree structure

F. Peer-to-peer communication – direct messaging window

G. Proxy suggestion box - through the direct message system.

H. Community-to-peer communication – RSS feed window

I. Community creation – enabling circles

J. Generally enhancing the user profile with vote statistics, history, etc.

Visualization88

The need for visually representing knowledge is nothing new and attempts have been

made over the years to improve IT platforms in all sorts of technological experiments. Without

realizing it, you, as the end user, are currently enjoying some of the best social/commercial

designs out there.

Visual IT representation 89 started in the 1980s, when "SemNet produced three-dimensional graphic representations of large knowledge bases to help users grasp complex

relationships involved. The design of SemNet focuses on the graphical representations of three

types of components: identification of individual elements in a large knowledge base, the

relative position of an element within a network context, and explicit relationships between

elements” (Chen, 2002). The Internet representations of .net domains became legendary and

inspired a whole range of consumer accessible relational representations such as this

Facebook module that allowed me to see my FB social circle90.

The first step in pragmatic policy analysis is to identify a problem. This is quite easy if

the problem is urgent or imperative, but you wouldn't want every single issue you deal with to
85 Through the API interface, other pieces of software can be connected (with some programming). http://dev.liquidfeedback.org/trac/lf/wiki/API
86 Support is also provided at official developer level, by registration here: http://apitest.liquidfeedback.org:25520/
87 None of this is truly original, but why reinvent the wheel? The most common of these functions will likely be addressed by the larger community, while I focus on the extra visualization ones.
88 "Visual representations and interaction techniques take advantage of the human eye's broad bandwidth pathway into the mind to allow users to see, explore, and understand large amounts of information at once. Information visualization focused on the creation of approaches for conveying abstract information in intuitive ways." (Thomas and Cook, 2005)
89 Based on a series of mathematical models for knowledge representation such as Information Retrieval Models, Bayesian Theory, Shannon's Information Theory (with which I'm familiar from my Communication BA), Condensation Clustering Values (with which I'm familiar from my previous clustering research), etc. (Chen, 2002).
90 http://mashable.com/2009/08/21/gorgeous-facebook-visualizations/ – by no means the only option


become an urgent matter for lack of being addressed. You would want to be prepared, and therein lies the difficulty of identifying your preparedness for a potential future issue in a massive corpus of insufficiently formulated policies.

So how do you identify a problem? It seems that in the Twitter era the first guy to yell "fire" is the problem finder, and then the whole herd will run towards it with opinions while moving physically as far away as possible. But it is better to prevent than to cure, which is why the ability to monitor a situation is essential, and why a good visualization, charged with as much data and metadata as possible, yet easy and convenient to use, is desirable.

Common wisdom says that two brains are better than one, but how about 20 million, or 200 million? Of course, the software I'm working on can only hope to contend with users on the order of 5-20K, yet even these numbers are vastly superior to the limited commissions set up nowadays to identify policy issues and set agendas, especially when these tend to be formed of highly subjective individuals with a personal degree of interest in the matters they supervise. Crowd policy monitoring is my attempt at popularizing a higher level of policy awareness, as opposed to the mere opinion "bubbling" expressed through Reddit and Twitter.

Fig. 11 My Facebook social circle representation, courtesy of the MyFnetwork application. The clusters are middle school, high school, BA, my first MA, various countries and jobs.


“Information visualization, or in other words, visual data analysis, is the one that relies

most on the cognitive skills of human analysts, and allows the discovery of unstructured

actionable insights that are limited only by human imagination and creativity. The analyst does

not have to learn any sophisticated methods to be able to interpret the visualizations of the

data. Information visualization is also a hypothesis generation scheme, which can be, and is

typically followed by more analytical or formal analysis, such as statistical hypothesis testing.”

(Anonymous Wikipedia editor91)

Timelines92

Chandler (1995) recommends as the best method of text analysis the “detailed

comparison and contrast of paired texts dealing with a similar topic” according to

syntagmic/paradigmic principles. But representing a policy matter requires more than mere

narrative or tree structures. It requires consideration of overarching issues (the corpus of policies) and underlying ones (token component analysis, definitions), the ability to move into detail (EDs, abstracts and links), a time dimension, etc.

Enter the most intuitive 93 tool for complex/time-dependent issue visualization and comparison – the timeline, a complex structure over which interested individuals can browse and identify faults or opportunities for improvement. Simply put, timelines allow for the ordering of more of every type of data within the same visual field. For example, on your PC monitor you might be able to have a few paragraphs and a couple of topic titles at the same time. On a timeline, you'll have 100-1000 topic titles at the same time, arranged chronologically, with various visual cues and colors for easy identification. In short, comparability at community level is enhanced because of:

- Comprehensive topic visualization: volume, color codes, font choices, etc.
- Preservation of the temporal value of data, customarily lost when data is shown in an ED format, together with semantic and observable connectors, which allow an insightful user out of the thousands watching to raise an issue before it happens
- Logarithmic timelines, which include an additional parameter, that of information novelty, meaning a dilatation of time as we move away from the present, both into the past and into the future, with less detail being exposed for the past and less prognosis for the future 94 (see the sketch after this list)

91 http://en.wikipedia.org/wiki/Information_visualization#cite_note-3
92 The original idea came to me from the Encarta Encyclopedia, where human society was structured historically according to selectable topics (everything that the current Timeglider can do), though the idea came too late and Encarta was eliminated by Wikipedia, which has yet to implement such a system.
93 Historically speaking there are some fascinating examples, as shown at http://www.cabinetmagazine.org/issues/13/timelines.php
94 A level of sophistication that our current Timeglider software doesn't support, except indirectly through resizing.


- Creation of a usage protocol, such as expectations of format (every topic having a body of text)
- Creation of a dictionary, as interpretation is paramount and many internationally reaching laws/agreements seem to lack one
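A minimal sketch of the "dilatation of time" idea from the logarithmic bullet above, assuming an arbitrary reference date of my own choosing. Timeglider itself only approximates this through zooming and resizing, so this illustrates the principle rather than the widget's API.

import math
from datetime import date

def log_axis_position(event_date, reference=date(2012, 9, 1)):
    # distance from the reference date, compressed logarithmically: recent events
    # get more horizontal room, while the deep past and far future are squeezed
    days = abs((event_date - reference).days)
    direction = 1 if event_date >= reference else -1
    return direction * math.log10(1 + days)

print(log_axis_position(date(2012, 8, 31)))   # yesterday: about -0.30
print(log_axis_position(date(2002, 9, 1)))    # a decade back: about -3.56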

Also, because of the sheer volume of condensed, community-linked data, timelines offer an unexpected opportunity for site growth through harmonization of intent with the greater network that the site is part of 95. "The PageRank algorithm instead analyzes human-

generated links assuming that web pages linked from many important pages are themselves

likely to be important. The algorithm computes a recursive score for pages, based on the

weighted sum of the PageRanks of the pages linking to them. PageRank is thought to correlate

well with human concepts of importance.”
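For illustration, a bare-bones power-iteration version of the PageRank idea quoted above. This is my own toy sketch with made-up page names; real search engines add many refinements on top of this recursion.

def pagerank(links, damping=0.85, iterations=50):
    # links: page -> list of pages it links to
    pages = set(links) | {t for targets in links.values() for t in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page in pages:
            targets = links.get(page, [])
            if not targets:                       # dangling page: spread its rank evenly
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new[t] += share
        rank = new
    return rank

# hypothetical mini-web: the page everyone links to ends up most "important"
print(pagerank({"policy_draft": ["eurostat"], "blog": ["eurostat", "policy_draft"], "eurostat": []}))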

As we can see, the current practical implementation of link technology and analysis 96 is not based on academic standards of citation, but on user requirements, of which policy organization is a part. It is time to move from the 20-citation author model to the 2,000 free internet links model and beyond, to the community-enabled 2 million one, in an attempt to create original and genuine solutions for pragmatic problems, which detached theoreticians seem to constantly ignore until one of them brilliantly points out a paradigm shift (by simply stating the imperative obvious).

Exposure draft 97

The first issue that jumped to my mind when observing the LF functionality was the rather poor interpretation of specific issues offered by the end users, in a complete misunderstanding of a policy's operational steps. Issues were being proposed by people with

good intentions, but without the necessary ability to articulate them. As such these issues

were the result of an opinion, which similarly inclined individuals will likely follow without

giving thought to a proper solution.

This type of patchwork solution seems to be a direct result of imperative, immediate events: situations that bridge into real social problems that the IT

community feels obligated to address. But obviously a community cannot be managed only

through imperative direction.

95 WWW topic input and providing links/outside imagery (hopefully of the updatable type) – module abstracts – topic names – timeline name for initial input into the human browser, after which the human browser will reverse this trend and, in deepening knowledge, will enhance the existing information loop at the same time others do.
96 Link analysis is a subset of network analysis and provides the relationships and associations between very many objects of different types that would be impossible to observe from isolated text pieces.
97 Presentation of an item of policy for the public. IFRS terminology, though the original idea came to me from traditional encyclopedias and the way academic journals present articles.


The ultimate model of exposure draft presentation has to be the Wikipedia model, in itself a massive repository of exposure drafts and a model of development. The ED should be joined by a critical assessment/commentary tool, such as the comment function offered by MS Word, which would offer the community the chance to amend the text of a proposal with suggestions 98.

Dictionary99

The existence of a dictionary100 binds people to a shared understanding and stops

individuals at the semantic level of discussion when the terms do not coincide, eliminating

dissent at later stages.

Ensuring consistency of approach – "In natural language processing, semantic compression is a process of compacting a lexicon used to build a textual document (or a set of documents) by reducing language heterogeneity, while maintaining text semantics. As a result, the same ideas can be represented using a smaller set of words. Semantic compression is advantageous in information retrieval tasks, improving their effectiveness (in terms of both precision and recall). This is due to more precise descriptors (reduced effect of language diversity – limited language redundancy, a step towards a controlled dictionary)" (Ceglarek et al., 2010). Topic delimitation is critical, as semantic incongruence can lead to a never-ending amount of debate between individuals who share complementary negotiating positions.
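A toy sketch of semantic compression in the sense quoted above: near-synonyms are mapped onto one canonical descriptor from the community dictionary, so the same idea is always indexed under the same word. The mapping below is invented purely for illustration.

def compress(tokens, canonical):
    # replace every known variant with its canonical descriptor, lower-casing on the way
    return [canonical.get(token.lower(), token.lower()) for token in tokens]

canonical = {"jobless": "unemployed", "out-of-work": "unemployed",
             "labour": "labor", "remuneration": "wage", "pay": "wage"}
print(compress("Jobless pay and labour remuneration".split(), canonical))
# -> ['unemployed', 'wage', 'and', 'labor', 'wage']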

Even with community support the amount of work in operating with the taxonomies is

so large that I hope it is possible to utilize some preexisting conditions, in the form of web

dictionaries. There just has to be a community accord on their exact definition and optimal dimensions. As of now, this particular topic requires further investigation.

98 Original idea: Microsoft Word, because it's an awesome idea inspired by editorial reviews. While LiquidFeedback has a suggestion system, it seems limited by comparison.
99 Original idea: from the law, where everything has to be explained in detail to avoid litigation, and where the formal language code creates convergence and enforces uniformity and consistency.
100 Which should include tools and protocol descriptions.

Fig. 12 An example of the Word comment function


The sorting/search function at site level has to be improved so as to allow for proper topic selection, whether topics are expressed through the ED, the timeline or the dictionary. Some

potential ideas might include topic tree navigation, thumbnail selections (commercial site

style), better semantic tags for identifying policies (which could possibly enable an

automatically tag populated timeline), individual user and circle search.

Site brand

To enhance the end user's platform experience and confer an identity on him, we must link him to a community with a clear purpose/image. Of the possible implementations of the platform, I think the only private one I can manage at this point is the think tank implementation (quite a weak, ineffectual and self-defeating 101 model from what I can imagine right now), as I hardly possess the legitimacy to create a civic representation website 102.

Tying user virtual identity to the external institutional one is of great concern to LF as

the platform, through the processes it seeks to moderate, is not interested in the individual

user per se, but in the institutional side of said individual. The collected IDs need to be homogeneous, for reference and comparability, which already means clear belonging to an

outside group (citizen, party member, etc). It’s a feature enforced by design functionality,

which means we are not dealing with a fully scalable social network, but rather with a “mirror”

network. This limitation is essential for purpose definition, unlike in my original idea, where

knowledge acquisition was intended to support any and all users. A very different branding

system is required here.

Therefore, while in my original proposal I was already envisioning a web domain 103, as I got closer to understanding the LF platform and my modifications I moved towards creating a technical instance only. As I want to create a proper website, I must raise the question of how I can safely identify and attract the most suited users to create the critical network mass 104 that the process requires. It's a question I'm definitely looking forward to answering, and one I will be asking in my own search for PhD support 105.

101 It would end up as a heavily loaded version of Reddit, losing its USP, the decision making model (without pragmatic effects, it would just be an impotent tool).
102 What I can hope for in the long run is that my theoretical work will be recognized by the emerging developer community and I/others will end up implementing some of my ideas at some constituency/institutional level. That implementation would also help support the general LF platform which, in its current form, desperately requires academic validation (before its flaws lead to it being publicly dismantled by theoreticians for hire).
103 Because of the LF decision making process, my original branding proposal, deconstructingeurope.com, would have to be modified.
104 The community/website needs enough users (nodes) to grow above a certain critical mass/cost break-even point of production/connectivity if it is to reap the benefits of my added modifications. "Classical economies of scale are on the production side, while network effects arise on the demand side [percentage of online users from the intended institution]."
105 How will the ultimate product address the knowledge architecture process?


I would also propose the adoption of a popular site ethos, represented by general principles, such as Joshua Kennon's seven laws of fair economics 106. These would definitely need a bit of cleaning up and would likely have to be subjected to community accord, as they would effectively constitute a sort of founding document of the website.

Attracting users to the platform should also be pursued through

acceptance/naturalism of the code. While there’s a need to make policy visible from a

deconstruction perspective, there’s also a need to make the process of analysis/debate highly

natural, organic, not felt by its users, that is to teach people and then make the method

invisible, fully accepted. Making the interactional process as smooth as possible will allow our

initiative not to break character and be accepted by the user, at least in its technical/design

implementation.

Enabling outside reach – because the target users are highly politicized individuals 107, we must ensure they can exercise a semblance of executive power after consensual decision making 108.

106 http://www.joshuakennon.com/joshua-kennons-seven-laws-of-fair-economics/
107 Who regardless of their ability would give their time to engage political subjects.
108 Linking the LF platform to a party would have ensured that by default, but options must be found for a think tank system. Some means to achieve this would be:
- social site integration to popularize subjects
- group-wide initiatives such as online petitions, even nominating physical delegates/activists
- eventually the community should be able to develop a member support system (scholarships, bursaries) that would go as far as to train individuals for particular roles (community lawyers, etc.).


The project work logs

Project calendar109

1) Some of the stages that I had originally envisioned were:

a. supporting the vision with theoretical support, (January-February), Research

proposal

b. defining the concept (March)

c. implementing the concept (April to July)

d. describing the project academically (August to September 17)

2) However, the actual calendar turned out to be:

a. supporting the vision with theoretical support (January-February), Research

proposal

b. defining the concept (March)

c. finding that the concept was insufficient after more literature review (April)

d. researching the LF platform and realizing a 2.0 version was to be launched in 28

June (May)

e. more Knowledge Theory research and ACL leg surgery (end of May)

f. waiting until 28 of June for LF 2.0 launch (English and API features, June)

g. monitoring LF functionality and refining the Knowledge Network concept (July)

h. [not so] friend comes to live at my place for free and bugs me for two months

(July-August)

i. the LF platform functionality comes at the price of difficulty of install (July)

j. after another 5K master's and the surgery period, I'm broke and have to start a full-time job (August)

k. start implementing the theory for Timelines, working on Timeglider (August)

l. start putting together a paper from 100K words spread across various notes

(September)

3) At the present time, the next period of implementation should be:

m. finalize the LF implementation on new server (September)

n. finalize API programming and testing on server instance (October)

o. create site brand and refine social protocols (November-December)
109 From the start I had an unclear time limit for the complete project implementation, as it required a number of very different stages with which I was unfamiliar. For the original phpBB architecture, with its highly automated functionality, I had envisioned (in January 2012) at least 6 months until I had a proper website implementation, but as it turned out, after three months of on-and-off work I abandoned it in favor of the better LiquidFeedback platform.


p. start PR job, create contacts, interest, communicate, promote (December-January)

q. start monitoring the website functionality (January, if finished)

r. further refine the concept for a professional iteration (January)

s. find a job, placement, promote my design for PhD, bursary (September-forever)

Installing the LF software

To make a long story (months) short, here is the install process for the LF platform,

described by my supervisor as a “record the development activity… a record of achievement, a

lesson learnt log and a plan for future activity".

1. Initially I was planning to run the Debian OS required by the LF platform on my Alienware MX17 PC, in a dual boot system, but after a few days of frustrating myself with failing to install Linux, I came to realize that previous RAID metadata 110 meant that Linux failed to read my hard drive properly (I installed Debian/Ubuntu alternately about 10 times).

2. I frustrated myself installing Debian (the least friendly Linux) on another laptop (salvaged literally from a garbage heap), only to find out that the WiFi board was fried, so my work in finding the right drivers (not included in the Debian distribution on some arbitrary principle) proved useless 111. Eventually, after a couple of days, I managed to set up my phone as a WiFi connector, using a non-standard command 112 for dynamic IPs, a little thing that came along since Linux was last updated properly.

3. After getting Debian to work I was finally ready to install the LF software according

to its rather complex set of instructions113. After that, I came to the great

realization that this is Debian, an OS famous for nonstandard command line

program installations. Plus all the great repositories it's bragging about are poorly

managed and I had to chase program dependencies one by one and install them

manually114. This single small line of code at the beginning of the FAQ took me

another couple of days:

110 I was one of the lucky few people in the world who got to experience a RAID controller failure on his PC about a year ago. Basically, it can't be fixed, and it destroyed one of my hard drive bays along with a HD.
111 At this point my Computer Science housemate wished me luck and ran away, unable to sort out the mess of not knowing whether the drivers, the hardware or simply the Debian OS was at fault.
112 I'll never forget the "dhclient usb0" command as long as I live.
113 http://dev.liquidfeedback.org/trac/lf/wiki/installation – an installation which took me through C++ and PostgreSQL command functionality, from things I was seeing for the first time (SQL) to things I was seeing again after a very long time (C++). For technical details see the webpage.
114 Nearly 60 individual pieces of software (the installation of which is not as straightforward as on Windows/Mac). "apt-get install" and "apt-get update" might be useful theoretically, but they didn't do much for me.

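# dependency packages named in the LF installation guide: the Lua 5.1 interpreter and headers, PostgreSQL and its client library, build tools, the lighttpd web server, GHC with the Parsec library, ImageMagick and the exim4 mail transfer agent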

apt-get install lua5.1 postgresql build-essential libpq-dev liblua5.1-0-dev lighttpd ghc libghc6-parsec3-dev imagemagick exim4

4. After installing everything, I received a server error which I had difficulty addressing. It seems that the LF frontend's 115 program dependency WebMCP had a series of issues with accessing its own Lua 5.1 dependency. I tried fixing it on my own for a few days with no success 116, then contacted the LF support service. Unfortunately, as I was to find out, the LF support was provided by someone unfamiliar with WebMCP 117, who only gave me the generic answer of reinstalling it all.

5. Pressed for time, I abandoned the server instance installation and focused on

creating a case study for proof-of-concept in regards to the enhanced visualization

of Timeglider.

115 The interface of the LF platform, which manages user interactivity.
116 Mostly moving libraries around.
117 While Jan Behrens is one of the LF platform programmers, the WebMCP application is maintained by someone else, whom I was unable to contact directly. I agree with Jan and, as soon as I have the time, I will purge the Lua 5.1 libraries and reinstall a newer version of WebMCP (which appeared in the meantime).

Dear Paul,

please try the following components with the following version numbers:

- WebMCP v1.2.4

- PostgreSQL 8.4, 9.0 or 9.1

- LiquidFeedback Core v2.0.11

- LiquidFeedback Frontend v2.0.2

Those components should work together.

If you experience problems installing these components, please write

another mail. It would be helpful to paste the error messages in the

email rather than taking photos. That way we could read them more easily.

Regards

Jan Behrens

On 08/11/12 10:11, Paul Suciu wrote:

> Hi, guys,

>

> I have encountered a series of issue on installing WebMCP on my Debian

> amd64 distro, as a precursor to Liquidfeedback. Since my Masters

Fig. 15 Copy of the reply Jan Behrens, LF developer, provided to my request for assistance on the 11th of August 2012


Without the controllable testing instance 118, I could not access lfapi (the LF test functions), which meant any API integration work done so far could not be properly tested 119. I then decided that the academic paper due in a little more than a month needed a proper visual demonstration 120, and I proceeded to work on the client-side programming.

A short theoretical introduction to timeline visualization

A timeline is a visual narration and should have a start, a peak/volume and an end. At

the same time it could be construed as a knowledge network module representation, within

the greater website network and the even greater WWW. As the most important 121 and least self-explanatory API module I planned was the timeline, I decided to proceed with its

development. Once more, I repeat my intent of enabling the community with academic

potentiality, by allowing individual users to freely populate timelines in an attempt to assist in

presenting an issue through the LF platform. The simple circuit would look like this:

Table data loader (an automatic conversion process) – Timeglider – API – Liquidfeedback

118 To give you an idea of the task I'm attempting: just to install this package, people with years of programming behind them struggle, and groups such as the Pirate Party of California are asking for donations so they can create their own instances, never mind modify them.
119 For a while there I wasn't a theoretician anymore, but a domain developer, coming to grips with his programming skill limitations and trying to surpass them because of time constraints.
120 I honestly thought I had a fair chance of making these modifications before time ran out, but as I became bogged down in technical details and academic externalities, I finally had to concede to doing only the Timeglider presentation of a case study, without too many conclusions, as an example of the visualization function intended for the platform's full service.
121 Its initial position at community policy analysis level, through its investigative and predictive qualities.

Fig. 16 A picture of my work system, from left to right: the Alienware MX17, displaying the Timeglider timeline in Chrome for Windows; the ReadyNas server; and the recycled HP Pavilion DV6000, running Debian.


While you are familiar with the Timeglider link to the LF platform through its API functionality, you will also notice a new component there. That is the part of the user interface allowing the user to insert timeline object values directly into the JSON file, as the current

implementation of the widget requires some knowledge of JSON/HTML functionality and

direct access to the JSON file122.

Timeglider

Why use Timeglider as a plugin? Because it's open source, built in the best possible manner 123, and is "a lightweight, extensible time-viewer-explorer which can zoom/pan and

otherwise explore future/past events easily. Timeglider.com provides an authoring

environment for creating hosted timelines; this plug in is meant for enterprise media, medical

software, private legal workspaces, etc. all of which may have APIs of their own” (Timeglider

license file).

Surprisingly, while the actual information for the GitHub download hasn't been updated in a while 124 (about a year, both on the widget site 125 and in the developer notes 126), the widget has been offered with FULL functionality 127 to end users. Some of the programmed features of the Timeglider widget include/will include, but are not limited to:

1. Date format, localization to user time zone

2. Search function by semantic topic

3. Legend by icon (allowing for corpus selections)

4. Event attributes (about 20 different categories)

5. Images (clusters, etc.)

6. Audio/video files (not implemented in the version I used)

7. Event editor (not implemented)

8. Printing of time range or saving to PDF (not implemented yet)

9. Import parsers (RSS, Flickr, twitter, Facebook, semantic "scraping" of dates on any

webpage)

10. Custom modifications such as embedding links in modal paragraphs, multilinks, tuning

container/modal size for large screen display

122 An impractical measure for a big server. One possible option would require modifying this open source plugin, http://shancarter.com/data_converter/, for Excel-to-JSON convertibility into a loader function similar to the address/e-mail/etc. filling plugin on most sites (a few prompts for creating – loading – modifying data as an option to the LF frontend), suitable for populating spreadsheet tables.
123 An open source JavaScript widget with API integration for a wide range of platforms by design. It also has features of logarithmic timelines by being scalable.
124 While Timeglider started out as a great idea, free under the MIT license, for pragmatic reasons it has been switched to a paid, website-dependent platform: http://timeglider.com/levels.php
125 http://timeglider.com/jquery/ – they upgraded the widget site after my letter, apparently, but because they did it in the last three days or so you're still getting the old version's pictures. Awesome guys; I have to write a thank-you note.
126 https://github.com/timeglider/jquery_widget – the new version solved some image bugs.
127 I had to find that out by myself while operating on the widget (it also saved me a lot of time I didn't have).


Because of selective zoom, Timeglider allows for an element of the logarithmic timeline, by showing only the big issues at its macro level and then adding more and more issues as we zoom in. Thresholds determine which issues are visible at each level: macro spans disappear when going into detail, while smaller, color-coded entries (such as a "There was no Javascript" item) suddenly appear. The only problem is how to structure a view in such a way that these appearances and disappearances best present and visualize a complex topic.

One of the early requirements for the software input was that it had to be easy to populate from a spreadsheet, a tabular format with which most users are familiar. Spreadsheets are a must in data mining, and any user who would otherwise have had to spend an inordinate amount of time crawling after various pieces of information should find his job made easier when the already formatted data can simply be inserted into a timeline 128. Of course, data processed through a format such as JSON or XML can also be retrieved in a tabular manner 129 (but a reverse plug-in must be provided), as we want the data to be easily accessible and usable 130.

There is a real necessity for creating this tool, as such formats don't allow for mistakes in their body of code 131. After transforming the Excel table into a JSON file with the open source Mr. Data Converter 132 plug-in from GitHub, the transformation was incomplete and I had to use a JSON validator 133, which was a must because of the high volume of data 134.

128 Provided that the necessary protocols for the variable fields have been fulfilled; otherwise the data cannot be read by the software. See the Excel file annex, which was used to populate the JSON file annex.
129 An HTML table would allow the data to be recovered with a simple copy-paste and reinserted into a spreadsheet application such as Excel.
130 Excel-transferable data: simple and easy to understand for accountants, which my grader will most likely be. Anyway, in the age of twitterism, it shouldn't be too hard for people to populate an Excel table, should it? After a while the Excel sheet can start to read like a timeline, if you've done enough entries into the JSON file.
131 Because they are just data organizers, the code literally has to be flawless, otherwise it will not load.
132 http://shancarter.com/data_converter/ – it allows for data formatting between a variety of languages.
133 http://jsonformatter.curiousconcept.com/
134 About 13,000 words for 100+ entries, and it had to be perfectly validated, otherwise it doesn't load. That was fun, as I had over 200 mistakes which I had to correct manually. This sort of hurdle would truly discourage an average end user.
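Since hand-editing the JSON proved so error-prone, a small conversion script is the obvious remedy. The Python sketch below is my own illustration: it assumes a CSV export whose header row already uses the event field names from the spreadsheet protocol described in the next section (startdate, title, description, importance, etc.); the file names and the wrapper keys around the event list are placeholders and would have to be matched against the example JSON shipped with the widget.

import csv
import json

def spreadsheet_to_timeline_json(csv_path, json_path, timeline_title):
    # read rows whose header row already follows the spreadsheet protocol
    with open(csv_path, newline="", encoding="utf-8") as f:
        events = list(csv.DictReader(f))
    for i, event in enumerate(events, start=1):
        event.setdefault("id", str(i))                             # every event needs an id
        event["importance"] = int(event.get("importance") or 50)   # default mid-range weight
    wrapper = [{"id": "policy_timeline",      # placeholder keys; align with the widget's example file
                "title": timeline_title,
                "events": events}]
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(wrapper, f, indent=2)       # json.dump guarantees syntactically valid output

# spreadsheet_to_timeline_json("eu_labor_market.csv", "eu_labor_market.json", "EU labor market convergence")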


Creating a timeline protocol

"Knowledge visualization shares some intrinsic characteristics with cartography – the art of making maps" (Chen, 2002). It really does, and every wrong move you make can detract from a good user experience. So there must be a protocol beyond the software-imposed ones 135.

1. Extend the logarithmic functionality of the timeline by treating events according to a logarithmic points system, such as assigning visibility values according to this tentative table (see the sketch after this list):

Visibility: 100 | 90 | 80 | 70 | 60 | 50 | etc.
Population: 100% | 50% | 25% | 10% | 5% | 1% | etc.
Costs: 100% | 50% | 25% | 10% | 5% | 1% | etc.
Actuality: today | days | month | past year | decade | century | etc.
City 136: Alpha | Beta | Gamma | Delta | Epsilon | etc. | etc.
Geography: continent | union | country | NUTS1 137 | NUTS2 | NUTS3 | etc.

135 Such as the first row naming the variables allowed in spreadsheet files (startdate, enddate, high_threshold, id, title, description, icon, date_display, importance, link, image, modal_type, css_class, span_color, y_position, etc.).
136 City classification by GaWC: http://www.lboro.ac.uk/gawc/

Fig. 17 The exceptional moment of the JSON file validation, when I knew the timeline would open with all its cases, after many weeks of work.


2. Condense information into an efficient summary of the topic by using the modal window. The timeline modal is an abstract-sized, easy-to-digest mini-article, with links for further exploration and imagery 138.
3. Insert as much qualitative policy detail as possible and less quantitative data, which could be better seen by means of charts, etc. A monotonous display of the same type of information gets boring quickly. One needs to create unique categories: not mere tags, but points of interest. For example, a chart is listed on its date of publication, despite referring to events that happened on a previous date. We now talk policy, not indexes; interested parties, not research subjects. It took me a while and a few hundred index entries to realize that one must have a view of policy beyond the limited point-to-point view of an economist/accountant 139.

4. Defining/separating by visual cues a variety of issues involved in policy such as journal

articles, academic papers, political slips, official party policy, lobby groups, shadow

137 http://en.wikipedia.org/wiki/Nomenclature_of_Territorial_Units_for_Statistics (wiki link, as the main EU institutional site didn't work at that moment)
138 The HTML formatting of this page is a must in order to enable proper paragraphs (the size of 2-3 Twitter feeds, as most individuals find it hard to synthesize complex text), multi-links (very important for indexing purposes) and generally a pleasant/fast experience.
139 If you want to see indexes, you can always use Excel alone to generate a chart, which you can then insert into the timeline. Don't get me wrong: in analyzing a complicated piece of policy or a policy corpus, I can't encourage enough the usage of methodically attained data (charts, academic articles and quotes, etc.), feature journal articles and opinions, imagery, etc. to confer legitimacy.

Fig. 18 The timeline modal (abstract) with date, topic description, various links, images, etc.; it can be HTML formatted


policy, policy laundering, current debates, implementation, public feedback,

externalities, alternatives, background, triggers, etc. 140
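To make the tentative visibility table under point 1 concrete, here is a minimal sketch of how the point system might be computed. The helper names, thresholds and the averaging step are my own assumptions; the protocol itself leaves the combination of the dimensions open.

// Minimal sketch of the logarithmic visibility protocol from point 1.
// All helper names are illustrative; the "Etc." tail of the table maps to 40.

// Population or cost share of the affected market, expressed as a fraction 0..1.
function visibilityFromShare(share) {
  if (share >= 1.00) return 100;
  if (share >= 0.50) return 90;
  if (share >= 0.25) return 80;
  if (share >= 0.10) return 70;
  if (share >= 0.05) return 60;
  if (share >= 0.01) return 50;
  return 40;
}

// Recency of the event, measured in days before "today".
function visibilityFromActuality(ageInDays) {
  if (ageInDays <= 1) return 100;     // today
  if (ageInDays <= 7) return 90;      // days
  if (ageInDays <= 31) return 80;     // month
  if (ageInDays <= 365) return 70;    // past year
  if (ageInDays <= 3650) return 60;   // decade
  if (ageInDays <= 36500) return 50;  // century
  return 40;
}

// Geographic scope, from continent down to NUTS3 regions.
function visibilityFromGeography(level) {
  var scale = { continent: 100, union: 90, country: 80, nuts1: 70, nuts2: 60, nuts3: 50 };
  return scale[level] || 40;
}

// One possible combination rule: average the dimensions and round to the nearest
// table step, so that no single dimension dominates the final visibility.
function visibilityForEvent(ev) {
  var scores = [
    visibilityFromShare(ev.populationShare),
    visibilityFromShare(ev.costShare),
    visibilityFromActuality(ev.ageInDays),
    visibilityFromGeography(ev.geoLevel)
  ];
  var sum = scores.reduce(function (a, b) { return a + b; }, 0);
  return Math.round((sum / scores.length) / 10) * 10;
}

Under these assumptions, an event affecting 25% of the population and 25% of costs, published within the past month and scoped at country level, scores 80 on every dimension and therefore 80 overall.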
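Points 2 to 4 then come together in the individual timeline entry. The sketch below, again only an assumption-laden illustration, assembles one such entry from the spreadsheet/JSON variable names listed in footnote 135 (id, startdate, enddate, title, description, icon, importance, link, css_class); the category-to-cue map and all sample values are hypothetical, and the closing JSON round-trip simply mirrors the validation step celebrated in Fig.17. Feeding the resulting array to the Timeglider widget itself is not shown.

// Sketch of a single timeline entry, using field names from footnote 135.
// The category -> visual cue map illustrates point 4; its contents are invented.
var visualCues = {
  "journal article":       { icon: "circle_blue.png",   css_class: "journal" },
  "academic paper":        { icon: "circle_green.png",  css_class: "academic" },
  "official party policy": { icon: "square_red.png",    css_class: "official" },
  "lobby position":        { icon: "square_grey.png",   css_class: "lobby" },
  "public feedback":       { icon: "triangle_teal.png", css_class: "feedback" }
};

function makeEntry(raw) {
  var cue = visualCues[raw.category] || {};
  return {
    id: raw.id,
    startdate: raw.startdate,   // date of the policy event itself, not of the
    enddate: raw.enddate,       // publication reporting it (point 3)
    title: raw.title,
    // The modal body (point 2, footnote 138): an abstract-sized, HTML-formatted
    // mini-article with further links for indexing and exploration.
    description: "<p>" + raw.summary + "</p>" +
                 "<p><a href='" + raw.link + "'>Further reading</a></p>",
    link: raw.link,
    icon: cue.icon,
    css_class: cue.css_class,
    importance: raw.importance  // e.g. the visibility score from the sketch above
  };
}

// Invented sample data, followed by a JSON round-trip as a minimal sanity check
// before handing the file to the timeline widget (cf. Fig.17).
var entries = [makeEntry({
  id: "eu-labour-0001",
  category: "official party policy",
  startdate: "2012-01-01",
  enddate: "2012-01-01",
  title: "Hypothetical national minimum wage adjustment",
  summary: "Two or three tweet-sized paragraphs condensing the measure.",
  link: "http://example.org/source-document",
  importance: 70
})];

JSON.parse(JSON.stringify(entries)); // throws if the structure cannot serialize cleanly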

Case study treatment and conclusions

I suppose that more than once while reading this paper you might have asked yourself: where is the accounting/finance, or at least the economic, angle of this paper? Well, here it is, or better yet, access the live demo from the supplementary CDs provided with the paper.

Because of the difficulties encountered in explaining a novel concept141, I decided I had to create a large case/proof-of-concept to support my research. While case finality is less important than the mechanism of its analysis, I still felt I had to pick a case that covered a wide market area, both because end users will have to tackle such cases and because I was by now used to treating large-scale research.

There is also the issue of social utility, which I mentioned in the introduction. As such, I decided to target the policies with the largest number of affected individuals within the easiest-to-understand environment, and that is public policies within the EU142. That still leaves a massive amount of work to be done143, as I based my case study on a thorny EU issue, the creation of a unified labor market through the convergence of national markets.

Despite it being a case of interest to a wide community, depending on coverage, the conclusions drawn in this paper will not be about social topics but about how the data was processed by means of the website. Safe to say, however, that I do hope this first case will attract like-minded individuals from a variety of backgrounds who will assist me in building the site. I would call this process of getting others involved a reverse feedback loop: people come first to provide feedback out of interest, then get hooked, build content and launch initiatives, spurring others to do the same and helping with said mammoth task.

It's not that the EU and national forums don't provide information; it's just that their informational structures have grown disjointed from one another, as clearly exemplified by their heterogeneous interfaces and unlinked approaches. It's as if everyone is competing to be bigger, not more integrated, a typical facet of monolithic bureaucracy144.

140 Enough skill should eventually reveal policy blueprints through the timeline, in respect of functionality, scope, agents, interconnectivity, relative power, etc.
141 At least for the academic environment.
142 While my comprehension of the EU goes beyond the paper union and sees it as an integrated economic union, inclusive of the subservient/buffer states outside its borders, I felt I needed to restrict my analysis to a familiar subject for ease of explanation.
143 Since most of even the most basic data available is squandered in a sea of information on various local-language government sites, it still promises to be a mammoth task.
144 Eurostat, the EU statistical unit, has a wiki, a completely expositional method of presenting data. No debate, no influence. It does, however, establish a baseline, a centralized knowledge repository out of which some policy issues can be reconstituted. I must confess, however, that until now I wasn't aware of the encyclopedia, despite it being around for three years or so. It just goes to show the power of Wikipedia in drowning out competition. Crowd-generated information easily trumps the best the EU has to offer in terms of exposure and content volume.


The crowd can bring these divergent ideas into focus, identify their strengths and issue recommendations for their upgrade, merger or elimination by accessing deep information such as constituent individuals, effective powers and costs involved145. A unified method of presentation and common standards of assessment must be identified146. The case I chose to present is the labor market, the poster-child field for socio-economic policy147.

A policy timeline is a bit like a PR press-release folder into which all the important issues get put. And because my project aims to serve the individual, I was interested in such issues as minimum wage, retirement age, youth employment, etc. Catering to the common man makes synthesizing information in an easy-to-digest manner essential148.

145 Because, believe me when I say it, there is a lot of material out there and, ironically, it is neither good enough nor plentiful enough to run a group of 500 million people. Some of these groups have been around for years and I can't believe how little they have produced.
146 Putting together a case study on the EU labor market is a year-long research project in itself, not because it's hard to fill a spreadsheet, but because it's hard to chase down and decide which data is important. Exposure files often fail to follow the most basic of formats, and while you can read a report that presents itself at abstract size (perfect for a modal, but useless for further investigation), you can also read a report of hundreds of pages, difficult to grasp and hard to condense into a modal after a first reading. The initial image you get is shockingly complex and you strive to make sense of it and put it into perspective.
147 Unfortunately, when analyzing any such arena one must be familiar both with prevailing realities in the field and with current theoretical developments, the subspecies of socio-economic policy called Labor Economics. As I mentioned before, because I aim to task the crowd user with the job of analyzing policy, I will only build this case study as a demo model of visualization. Also, this is only the first stage of the policy analysis model, so the data might appear rather unfocused. What I'm trying to do is to speed up and ease market-wide analysis, not unlike the type of focused research analysis I did before (despite the fact that I was clueless in many respects).
148 What you're looking at in one window is about 1000 pages of information, structured along category tree lines, temporal relationships (determined – determinant, syntagmatic sub- and supra-divisions, alternative/paradigmatic relationships) and logarithmic importance (novelty). Ironically, because of the form of my idea, I've managed to break a personal cardinal rule – make sure people notice the amount of work you do ;). In this case, by design, you should not be overwhelmed.


The great thing about my timeline is that you don't have to possess any advanced analysis skills when faced with extremely sophisticated information. Copy, paste, follow the rules and the big picture will appear. Then and only then do you have to make up your mind, unlike in a traditional policy reading where by the end of the first paragraph you have forgotten what it was about149.

149 ...and by the end of the first page you need a coffee to keep you up. God forbid you should attempt to read a book, because you would find yourself, years from now, an old person regretful of a wasted life. Let's imagine for a second that, unlike a specialist, I, as a regular user, don't have weeks to waste because I have a busy life (and not because I only have a month until I have to present an unfinished, excruciatingly complicated piece of work).

Fig.19 A timeline of the EU labor market (incomplete), proof-of-concept for the visual representation of complex policy topics in Liquidfeedback.

Fig.20 The newest version of the timeline, launched on the 10th of September, fixes some issues such as automatic image resizing and the general design.


A complex representational image forces you to frame a problem in a transparent, manageable way when designing a presentation, which means it is very hard to push a bad argument without exposing its holes, even to less skilled individuals150.

I wish I could say that my attempts at populating the timeline showcased an easy and fast editing process, but I'd be lying. The necessity of properly structuring data and collecting the essential bits is quite demanding, especially in the first few days, until it becomes a sort of second nature. Some might ask why I bother with so much detail. It is because I want to retain and enhance the functionality of all this fantastic software, to offer added value to the whole process, instead of detracting from functionality by simply being too lazy for an exhaustive understanding of the concept151.

150 The Labor Timeline doesn't look like much (as day-to-day policy is never very exciting), but imagine you were interested in one particular aspect/country/period of the market: how much time would you save by being able to quickly access the relevant topics?
151 ...the way that most EU economic institutions do; I say this after spending a few weeks on the websites of institutions such as the UK government, which may even deter access by requiring Freedom of Information paperwork, many times for invisible/non-indexed documents.


Conclusions

Due to the modular nature of open source, we have seen not only that it is possible to create fully integrated tools of policy analysis/generation (maybe even implementation), but also how these tools can be used to enable community functions that can prove far superior to commercial ones, because of the higher stakes and capabilities said communities possess as a whole.

This arrogance of intellectuality152 quickly dissipates when one is faced with the truth of one's own illiteracy, an illiteracy reflected at the product level of the EU administrative community as a whole. If, as social plutocratic leaders, we are unable to master everything from the ability to formalize natural languages and create social protocols to the ability to quickly and efficiently utilize fully formalized languages in the medium of IT153, then it is no wonder that the community at large, out of sheer frustration, will and should succeed in creating its own mechanisms of self-governance, by exploring issues such as proxy voting154.

In respect to my paper's goals, I don't believe there should be a common closure for what I envisioned as essentially an iterative process, and I am willing to leave some of the matter open to spur reader interest into action. More so, due to the vast potential of the theme155, it would have been impossible for me to provide such closure.

152 While providing a visual proof-of-concept for only the initial stage of policy analysis on an already existing infrastructure, I have also shown how the very complex nature of such problems both poses a challenge and demands from the average researcher skills that surpass narrow field definitions, such as intimate knowledge of the pervasive technologies in the fields of data manipulation and user interactivity. While not everybody needs to know the complete process of crafting a community-enabled knowledge network, the difficulties I encountered during the project showed me just how removed from pragmatic implementation a graduate of multiple academic institutions, an intellectual by all rights, can be.
153 Don't get me wrong, I don't really believe that individuals who only identify themselves with their narrow IT niche are more suited to policy generation. What I am advocating is a degree of pragmatic completeness.
154 A first and extremely important barrier to surpass is to get over the legacy of security-mad institutional protocols and bypass the need to compartmentalize and control the flow of useful knowledge to such a degree that it becomes useless. While through its free knowledge repositories the community is slowly providing a solution to bureaucracy, the established directory organizations must themselves take every step toward adopting open source methodology and support the emerging knowledge community in its goals.
155 And remember, most of this time was eaten away not by the project itself, but by the vast body of literature I had to review and synthesize in an academically digestible form, so that an outside viewer could follow the heuristic conceptual process. Remember that in the case of Michael Richardson, the Timeglider creator, it only took me half a page to convince him of the validity of my idea, which I believe spurred him into updating his widget site after nearly a year. Such is the convenience of being a motivated political activist with a deep understanding of IT architecture capabilities.


In respect to the work I have done, if this approach is truly needed, all I had to do was create a basic pattern of approach, and the community user will jump at the opportunity to fill in the form. That is the wondrous nature of practical approaches, their ability to self-actualize out of immediate necessity: "build it and they will come". I would truly be honored if the LF staff would read my paper and upgrade their amazing software with features to formalize discourse, adapted from academic research practice.

I also believe I have proved that my hypotheses156 were valid:

- Community-enabled knowledge architecture is possible157.
- Open source has proven itself pragmatically to be vastly superior to closed repositories in its ability to allow those with the least resources, who need it the most, to access the general body of knowledge.
- The LF platform, through the rapid adoption of its current version, represents an important step in the popularization of such efficient, technologically enabled mechanisms for decision making at community level, raising both the prospect of better self-governance and the need for higher IT literacy (at community level).
- Additional functionality has now come within the reach of neophytes such as myself with the propagation of modular software, which follows the design principles of object-oriented programming158 and whose principles of efficient organization can be further transplanted at community level. As such, a very useful visual module such as a timeline could be constructed for implementation in relatively experimental software.
- Calls for enhanced visualization of complex knowledge structures such as policy have been raised for a while, but have yet to be implemented properly.

While some other points I was keen to make are more self-evident, what hasn't really sunk in at the academic community level is the quality of the open source software available. Take, for example, Timeglider: a free, easy-to-manage timeline plug-in, vastly superior to most commercial159, website-dependent offers, provided by its creators against a sole collateral, the community-supported social contract that the software will be employed for the common good, so as to justify the vast amount of time and capacity invested by its makers.

156 Even though I didn't formally identify them as such, and there were quite a few more than is traditionally prescribed by academic research, because of the size of the research. In respect of identifying gaps in knowledge, after reading through the body of knowledge one comes to the conclusion that these gaps are so obvious that they might as well represent truisms. In that respect the paper could hardly be considered scientific and could be construed as some kind of manifesto. However, as I point out from the start, this paper isn't concerned with showcasing gaps in the body of a selectively maintained, closed type of expertise repository, but with addressing the issue at a pragmatic level, as it exists and is delegitimizing the current social constructs.
157 It already exists in the form of free knowledge repositories (communities of knowledge), debate platforms (forums, etc.) and, just beginning, in the form of IT-facilitated consensual decision making (Liquidfeedback, Adhocracy). More than that, as the evolution of such platforms for self-expression and actualization proves, it is self-organizing and in search of further structuring.
158 Most interactional software used for visualization is built on object-oriented platforms (some ubiquitous examples from the project are Javascript and PHP).



I should perhaps complain about the lack of time160 in managing such a vast concept, but truly the only complaint I have on that side is that I didn't manage to make better use of my time. Even so, the intensity with which my brain functioned in the final weeks of the project was tremendous. I never imagined that having to conceive and manage all aspects of a knowledge-based research project could be so rewarding and offer such insight. There are, of course, a few areas of knowledge, beyond the project itself, that I would like to explore further:

- mathematical models for knowledge representation and knowledge space construction
- the Schulze method for proxy voting and the practical implications of Game Theory
- semantic search models
- Usenet interactivity studies

As part of this endeavor I also plan to popularize my activities among similarly interested individuals/institutions, participating in open source developments and research forums such as the ESRC161, with a view to possible PhD funding within the UK162.

Economics cannot be simple numbers. It must be political economy, graph theory, algebra, semantics, etc., if we are to truly concern ourselves with relationships of production, just as from a research perspective we cannot all be number crunchers, as there is no more room there. A true researcher must forge his own niche; anyone who follows him there is simply a student of the concept's originator163.

All in all, this is a manual on how a single individual has managed to implement a

pragmatic process of policy analysis that could benefit a community, which is what I asked for

in my research question.

159 The switch from the MIT to the limited license has also made me keenly aware of the difference between “free” and “open”: http://opensource.org/licenses/mit-license.php
160 I suppose I always knew I wouldn't end up with a finished product, but the idea wasn't for me to finish the project out of yet-to-be-properly-tested software, but to use the momentum generated by my master's degree and set up the basis for a future PhD research project. I knew it was going to be hard to manage such a project, but I truly wanted to prove that it is possible and that the issues aren't insurmountable.
161 http://www.esrc.ac.uk/ Economic and Social Research Council (dealing in socio-economic policy, with the motto “Shaping Society”)
162 While I'm not sure this type of project is suitable for a 3-year PhD, since in 3 years' time this idea will either blow up or blow over, through my involvement within the context of IT-enhanced knowledge visualization I hope to generate externalities far above merely satisfying academic requirements.
163 Pragmatically speaking, the best job you can have in this world is the one you can conceptualize for yourself and that serves a real social need, as there's probably no competition and you'll have first-entry and standard-setting privileges.


Fig. 21 Iteration spiral with heuristic elements. The spiral's stages, in order:
- Another economic immigrant arrives in England
- Difficulty getting employed within the current accounting/financial environment
- An uneven market makes academic advantage moot, as anyone can function in a no-rules playfield
- The economic environment can only be stabilized through formalization with socio-political intervention
- Current mechanisms of change are ineffectual, serving not community interests but lobby ones
- The reason for this ineffectuality is the highly distributed nature of government plutocracy
- The open source community provides a much better example of data integration and social utility
- Traditional means of engaging the general community have been forums, blogs, etc.
- First project iteration, combining PhpBB forums with blog-style exposure drafts
- Recognizing the ineffectual nature of traditional social, commercial platforms in dealing with complex issues
- The need to formalize policy by submitting it to complex semantic analysis, with the assistance of IT networks
- Discovering the Liquidfeedback political approach as opposed to the academic research approach
- Attempting to combine intent and knowledge into a community of production
- The concept of a community-populated timeline, dictionary and other tools added to Liquidfeedback
- The need to empower said community with the necessary analysis tools at common level to avoid plutocracy
- The means to provide quick access to huge amounts of data is to take advantage of the visual/semantic brain
- Finding Timeglider, an open source timeline created specifically to assist in community organisation
- The need to create an interface allowing users both to easily populate the timeline and to get information back
- With each iteration, moving closer to creating a working model of a community policy-generating architecture