Saturday, May 23, 2020

Formal Methods - A usability perspective [Mapping]

In my last post I started a discussion about the classic usability book "The Design of Everyday Things" by Don Norman. I talked about the concepts of "Affordances" and "Signifiers". These are the primary mechanisms by which users interact with a design. Affordances refer to a relationship between an object, the functions it can perform, and the operations that can be performed by a user on it to achieve these functions. Signifiers are parts of the design that help users identify the presence of an affordance. Signifiers further indicate to a user "where" a particular user action is to be performed to achieve a function. I tried to illustrate these concepts with examples based on a few key formal-methods artifacts ("things").

In this post, I would like to introduce another concept -- Mapping -- used by Don Norman in his book, and try to interpret it in the context of formal methods.

Mapping

If signifiers are the parts of a design that indicate how a user can interact with an object, "mappings" locate and relate signifiers to the context in which a user would interact with the artifact. Many of the examples of mappings given in the book have a "spatial" flavour to them -- the layout of light switches "mapped" to the actual physical layout of the lights. A good mapping allows a user to understand and predict what their action would achieve.

Let us take the example of the type-system that I introduced in my last post. The artifact under consideration is a programming language; the "affordance" is the ability to construct a program having some properties (lack of runtime errors), and the "signifiers" are the type annotations. Here, a "mapping" is the relationship between the program being written and its type annotations.

Most programming languages have a way of annotating types that follows the syntax of the program. In this sense the mapping is as close to perfect as it can get! I would like to give a couple of examples where the mapping of the type-system works well, using two different programming languages.

Example 1

Below is a simple C++ function to calculate the maximum element of a vector. The type annotations here are the unsigned int and std::vector<unsigned int> markers.

unsigned int maxVector(std::vector<unsigned int> inputVector) {
    unsigned int max = 0;
    for(unsigned int elem : inputVector) {
        if (elem > max) max = elem;
    }
    return max;
}

In this example, it is clear (if you are aware of the basics of C++ syntax) that the function maxVector takes as input a vector of type std::vector<unsigned int> and returns an unsigned int. Similarly, the for-loop iterates over each element of inputVector, and the conditional updates the local variable max with the maximum element seen so far. The type of max matches the return type of the function, so the overall program is well typed and we have some assurance that it is "correct".

Example 2

A similar implementation in the functional programming language OCaml is given below. Again, the type annotations (int list and int) are written explicitly.

let maxList (ls : int list) : int  =
    let max (x: int) (y: int) : int = if (x > y) then x else y in
    List.fold_left max 0 ls   

Let us also look at situations where the mapping is not great. How many among you have struggled at one time or another with getting the types right when writing a functional program?

When things go wrong

The utility of mapping in the case of type-systems is really seen when things go wrong; the mapping provided by the type-system plays a big role in localizing, identifying, and fixing the problem. Consider the example below, a modified version of Example 1.

Example 3

unsigned int maxVector(std::vector<unsigned int> inputVector) {
    int max = 0;
    for(unsigned int elem : inputVector) {
        if (elem > max) max = elem;
    }
    return max;
}

Compiling the example above will give the warning:

hello.cpp: In function ‘unsigned int maxVector(std::vector<unsigned int>)’:
hello.cpp:9:18: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
         if (elem > max) max = elem;
             ~~~~~^~~~~

Basically, there is a problem in the expression (elem > max), and a quick inspection of the code allows us to easily localize the problem to the declarations of elem and max, which obviously do not match. The declarations
  • int max
  • std::vector<unsigned int> inputVector
clearly map the types to variables, and allow localization of the problem to these two declarations.
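
For completeness, here is the repaired function (a sketch of mine; it simply reconciles the two declarations, which takes us back to Example 1). Once max is declared with the same unsigned element type as inputVector, the warning disappears.

#include <vector>

unsigned int maxVector(std::vector<unsigned int> inputVector) {
    unsigned int max = 0;  // was "int max = 0;" -- now matches the element type of inputVector
    for(unsigned int elem : inputVector) {
        if (elem > max) max = elem;
    }
    return max;
}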

Note that the warning provided by the type-system is related to the notion of "Constraints" that I will talk about in a future post.

Let us take this example further.

Example 4

#include <iostream>
#include <vector>
using std::cout;

template <typename T>
auto maxVector(std::vector<T> inputVector) {
    auto max = 0;
    for(auto elem : inputVector) {
        if (elem > max) max = elem;
    }
    return max;
}
int main()
{
    std::vector<unsigned int> vect{1,2,3,4,56,7};
    cout<<"Hello World " << maxVector(vect) << std::endl;
    return 0;
}

In the above example, I have tried to eliminate explicit typing as far as possible, to focus on the concept of mapping. Compiling the above code with a C++14 compiler gives the following warnings:

hello.cpp:10:18: warning: comparison of integers of different signs: 'unsigned int' and 'int' [-Wsign-compare]
        if (elem > max) max = elem;
            ~~~~ ^ ~~~
hello.cpp:20:29: note: in instantiation of function template specialization 'maxVector<unsigned int>' requested here
    cout<<"Hello World " << maxVector(vect) << std::endl;
                            ^

Here, although we can localize the warning to the expression, the mapping of types to expressions is not so simple -- some of the mappings are implicit because of the use of the template argument T and the use of auto:
  • We have to trace the instantiation of the template to the expression maxVector(vect), and from there further localize it to the declaration std::vector<unsigned int> vect{1,2,3,4,56,7}.
  • We have to use the knowledge that the literal 0 in the declaration auto max = 0 is actually interpreted by the compiler as an int.
  • Finally, we can localize the problem to the mismatch between the types of vect and max.
Deciphering the error here was a much more difficult exercise than in Example 3, and this can be attributed to the fact that the implicit typing rules of C++14 can result in much more complex mappings of the affordance of types.
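
For contrast, here is a sketch of mine (not from the original post) of Example 4 with the mapping made explicit again: the accumulator is declared with the template parameter T, so every variable in the comparison has a type spelled out at its declaration, and the sign-compare warning goes away.

#include <iostream>
#include <vector>

template <typename T>
T maxVector(const std::vector<T>& inputVector) {
    T max{};                         // explicit: max has the element type T, value-initialized
    for (const T& elem : inputVector) {
        if (elem > max) max = elem;  // both sides of the comparison now have type T
    }
    return max;
}

int main()
{
    std::vector<unsigned int> vect{1,2,3,4,56,7};
    std::cout << "Hello World " << maxVector(vect) << std::endl;
    return 0;
}

With T = unsigned int, max starts at 0u and the comparison is between two unsigned ints, so the declarations once again map types directly to the variables involved.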

A similar argument can be made for the OCaml example. In Example 2 above I have been explicit in terms of type-annotations. But OCaml (like many functional languages) is equipped with a type-inference system that does not require the programmer to annotate types. The following is a completely valid program that achieves the same purpose as Example 2:

Example 5

let maxList ls =
    let max x y = if (x > y) then x else y in
    List.fold_left max 0 ls   

As you can see, there are no type annotations to be seen anywhere -- here the mapping of the affordance of the type-system is, in a sense, missing altogether!

Although type-inference allows the programmer to type less code and is arguably less "ugly" with fewer intervening annotations, the implicit mapping makes the affordance of the type-system much more difficult to use. 

I am sure many of you have at some time or another torn out your hair when trying to fix a "type-error" in your program: in OCaml, you end up adding random type-annotations at various points in the program, and in C++ you just wilt when faced with an 8000-line error message about templates that expands your horizons in terms of header files you have never heard of before!

Conclusion

In the post above, I have tried to explain the notion of mapping as applied to the affordance of types provided by a programming language -- it is something that helps users "localize" and "locate" an affordance with respect to an artifact, and helps them comprehend the relation between the affordance and the artifact. In many cases mapping is spatial, and the example of mapping given in the book is spatial in the geometric sense: 
the layout of light-switches on a switch-board mirrors the geometry of lights in the ceiling.
In the case of programming languages and type-systems it is also spatial, but not quite geometric.

Sunday, April 26, 2020

Formal Methods - A usability perspective [Affordances and Signifiers]

This is a repost of a post in https://acm-fm-blog.blogspot.com/2020/04/formal-methods-usability-perspective.html

Welcome back to the ACM FM Blog. It has been a long time (5 months!) since I posted the first of what was to be a "series of posts" on the usability of formal methods. I am afraid I have not lived up to my promise and initial enthusiasm. My excuse is that I am a novice when it comes to blogging and I was trying to "complete a post" -- and have been spectacularly unsuccessful. I have since come to the conclusion that it is probably better to post something regularly and keep a dialog going rather than waiting for a "complete" post -- even at the cost of having to retract various statements and opinions and in general embarrassing myself. So, here goes!

I have been reading the classic book "The design of everyday things" by Don Norman; and would like to share some of my thoughts as I have been going through this book. As an aside, about the reading experience itself: I have been using a combination of Kindle and Audible to go through this book. I have found that listening to Audible while simultaneously reading on Kindle gives me a much more "immersive" experience. I am sure it would be better with a physical book (I can drop it, fold it backwards, scribble on it), but I do seem to enjoy the voice component quite a bit. I would encourage you to try it as well.

The book has a number of things that we can learn about in general, but my focus was really to try and relate it to formal-methods problems and solutions. Let me start with the very first "new" word I learnt from the book - "affordance". Now, I had not come across this word till I was working on product design at MathWorks; and the way MathWorkers were using it seemed to be a fancy way to talk about a graphical widget. The typical phrases I heard were: "what is the affordance for doing xyz", "is this affordance discoverable?" etc. (I have introduced another word "discoverable" that I will cover in a later post).

A Wikipedia search gave results that were not completely easy to understand, and some parts of the entry also referred back to this book! The explanation that I liked in the book is that an "affordance" is a "relation" between an object, the functions it can perform, and the operations that can be performed by a user on it to achieve these functions. A simple example is a chair that "affords" sitting. Trust me, this is not as trivial as it sounds! Most of us would have the image of a chair (from Wikipedia) like this. This is great and it is obvious that this chair "affords" sitting. But how about this chair, also from the same Wikipedia page? I am sure you will think twice before sitting on it. Another example is this chair -- I use a chair like this regularly, and I have seen so many people trying, unsuccessfully, to figure out how to sit in this chair (BTW, this is how you sit in this chair). Yet another interesting example is this -- I have still not figured out how to use this chair without injuring myself!

Anyway, the point I want to make is that all the chairs in the last paragraph "afford" sitting. Where they differ is in how the design of the chair "signifies" the function it can perform to you (a user). A "signifier" is also a term introduced in the book, as a mechanism for communicating an affordance which may otherwise not be perceived by the user. If an affordance is not perceived by a user, how can it be usable? Signifiers play an important role in the design of everyday things!

With the chairs, an interesting aspect to note is that "signifiers" are not completely context-free -- I am sure there are some of you for whom the way to use the different chairs is obvious: you have some background knowledge and are able to use the chair with no problems. This problem of "knowledge" is also addressed later in the book and I will not talk much about it now (another topic for a future post!).

Now let us apply this to formal methods and the work we are all involved in as a community. What are some examples of "affordances" and "signifiers" that we can associate with? The answers are not immediately obvious, but some examples I would like to briefly discuss are:
  • A function "affords" application, and a relation "affords" satisfaction
    • The signifier for this is just the mathematical existence of the function/relation itself 
  • A finite-state automaton "affords" a regular language (and vice-versa)
    • The signifier is the fact that you can follow arrows in the FSM bubble diagram and this leads to a language -- the fact that this is always a regular language is not at all obvious and is a deep result!
  • A type-system "affords" programs that avoids run-time errors (sometimes!)
    • The "signifier" of a type-system are the type-annotations that a programming language requires (or infers). Again the fact that this will ensure programs that do not have run-time errors is not at all obvious (or even true!)
  • A program "affords" satisfaction of a specification
    • Here the affordance is not so obvious: the object is a pair of program and specification. Together they raise the question of satisfaction: this is the utility/function of the object -- to demonstrate the satisfaction relation. This function itself is not at all trivial and I do not think there is any "signifier" for this. Maybe this needs to be broken up into smaller pieces.
  • A proof-tree "affords" a check that it is well-formed and is a proof in a particular logic
    • The signifier here is the structure of a proof-tree -- these are generally designed to indicate the inferences performed, and the rule used for each inference. The function of being checked is "afforded" by the tree-structure of the proof.
Now some of the above examples may seem a bit artificial, but I believe this is an exercise worth doing. It will focus our minds on who the users of our work are, what they are trying to achieve, and how we are empowering them to achieve their goals. I believe that thinking through these concepts will help us get clarity on the value we provide as a community and help us communicate this more effectively to the rest of the world.

In the above examples I have avoided using tools and algorithms that are based on formal methods -- I wanted to focus on the core artifacts that we design and use in formal methods themselves, rather than the programs that execute them. The affordance of a program is to "execute it", and as a community I feel we have focused on coming up with "magic" algorithms which solve difficult problems, and have struggled to explain this to the rest of the world. For the world, the main affordance of a program is to execute it -- and this does not help explain the problem that is solved!

I hope this post directs you to think a little bit about affordances and signifiers. Do you have examples in your mind that you could share? I would love to hear your ideas.

Thursday, October 7, 2010

Functional vs. Component architecture

I have frequently heard software engineers refer to functional architecture and component architecture while discussing the software design process. The picture I often get is that the software development process first fixes the functional architecture, and then the components of the software are identified (forming the component architecture). The questions that naturally come to mind are: why are components different from functions? What are the driving forces behind the design of the functional and component architectures?

Taking the first question: in an ideal world, there is no real reason for components to be different from functions. But consider what would happen in a large system with thousands of functions: if each function were its own component, all these thousands of components would have to be tracked for changes, and configuration management would have to comprehend the links between changes across them. This is a difficult job, but it becomes an order of magnitude more complex when we consider a "product-line" development environment where multiple variants of functions are being maintained simultaneously. This complexity can be reduced by bundling multiple functions into components -- we then have a smaller number of components to track for configuration management.

So what metric should we use to design components? Which functions should be bundled together? Since our goal is to minimize the number of versions that we need to track, we should put functions that "change together" into the same component. From a product-line perspective, we should bundle together functions that "appear together" in products of the product-line. Both of these metrics can be identified easily by looking at the version-control history and the product-line bill-of-components respectively. These are not the only two dimensions; there may be other criteria -- for example, bundling together functions that need access to similar computing resources (floating-point arithmetic, for instance).
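
To make the "change together" metric concrete, here is a small sketch of mine (not from any particular tool): given, for each commit in the version-control history, the set of functions it touched, we count how often each pair of functions changes together; pairs with high counts are candidates for being bundled into the same component. The function names and commit data below are hypothetical.

#include <iostream>
#include <iterator>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

// One entry per commit: the set of functions modified by that commit.
using CommitLog = std::vector<std::set<std::string>>;

// Count, for every pair of functions, the number of commits that touched both.
std::map<std::pair<std::string, std::string>, int>
coChangeCounts(const CommitLog& log) {
    std::map<std::pair<std::string, std::string>, int> counts;
    for (const auto& commit : log) {
        for (auto i = commit.begin(); i != commit.end(); ++i) {
            for (auto j = std::next(i); j != commit.end(); ++j) {
                ++counts[{*i, *j}];  // each unordered pair counted once per commit
            }
        }
    }
    return counts;
}

int main() {
    // Hypothetical history: readSensor and filterSignal always change together,
    // so they are candidates for the same component.
    CommitLog log = {
        {"readSensor", "filterSignal"},
        {"readSensor", "filterSignal", "logData"},
        {"logData", "formatReport"},
    };
    for (const auto& entry : coChangeCounts(log)) {
        std::cout << entry.first.first << " + " << entry.first.second
                  << " : " << entry.second << "\n";
    }
    return 0;
}

The product-line metric is the same computation with each product's bill-of-components playing the role of a commit.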

Sunday, August 9, 2009

Why I do not like Statecharts

Statecharts are used all over the place for "Model Based Design" of software systems. I have always felt uneasy about the "graphical" programming language -- the marketing spiel around statecharts has always seemed too glib to me. Only recently have I been able to articulate this uneasiness clearly enough to satisfy myself.

First let me list down my complaints against statecharts and then address them one by one:
  1. Do Statecharts actually reduce the cost of software development or does it just move around the costs?
  2. Are AND and XOR actually good enough to express requirements and designs?
  3. Are Statechart models specifications or are they implementations?
  4. What is this "concurrency" all about?
One of the big "advantages" of Statecharts is that it is a "high level" "Domain Specific Language" that effectively addresses the problem of programming complex mode based behaviour (especially in particular domains such as embedded systems). However, it seems to me after maintaining and programming applications for a couple of months using Rhapsody, that this is not true. Statecharts give a false sense of simplicity for complex logic, and this results in engineers quickly adding a transition here and changing a guard there in order to modify the behaviour of an existing statechart. The generated code is complex enough that no one bothers to look at the code and/or reviews it. The fact is that Statecharts do not simplify complex logic, they just provide a couple of mechanisms for "decomposing" the complexity. People often do not realize that this decomposition also has to be very carefully done to avoid unintended behaviours, and Statecharts do not provide any mechanism for "analysing" the decomposition. In fact, my guess that the most effective part of development using tools such as Rhapsody is the fact that it helps in organizing interfaces between different modules using concepts such as ports. For example, if there is a push in an organization to adopt Model Based Development for even legacy code-based projects (and this is indeed there in a number of organizations), the last part that I would convert to "models" is the behaviour -- converting code to a model will not add to the maintainability of the project in any way.

Coming to the structuring mechanisms actually provided by Statecharts, AND and XOR sub-charts are what come immediately to mind. First, I feel that the way in which these are presented is not very helpful -- saying that AND charts help to model "concurrency" is very unhelpful. It is much more accurate to say that AND charts allow decomposing complex behaviour into "orthogonal" state machines. Basically, the emphasis should not be on the concurrency, but on a particular way of decomposing complexity. In fact, the theory of automata gives TWO canonical ways of decomposing state-machines -- one is the "parallel" composition which has been popularised by Statecharts; the other is "serial" composition, which is in fact more generic than "parallel" composition and subsumes it. Similarly, the XOR decomposition of statecharts is used to "zoom" into one part of the behaviour and specify it; but the XOR decomposition as provided by statecharts is constrained by "graphical" limitations -- the zoom is much more naturally specified by logical predicates, and a graphical notation is a very poor representation of the "boolean algebra" structure of the zoom predicates. This is a case of a "picture being worth ONLY a thousand words".

There is this idea that Statecharts are good for "specifying" behaviours. The act of specification is more than just putting down on paper our understanding of a behaviour -- that is simply a task of transliteration. Specifying behaviour (especially in the early stages of software) implies that the specifier does not yet fully comprehend all aspects of the required behaviour. In this case the specifier needs tools for "exploring" different aspects of the behaviour, for checking whether he has contradicted himself in places, and for ensuring that all corner cases have been considered. Statecharts do not provide tools for doing these activities.

Let us first consider the issue of "contradiction": in Statecharts we could map this to "non-determinism". However, non-determinism is clearly not the same as "contradiction" -- non-determinism could arise because we have abstracted out some details while specifying, and in this case it is a signal to "un-abstract" some parts. This is what most software theoreticians/practitioners mean when they talk of the "power of non-determinism" -- they are really talking of "abstraction". In a specification such non-determinism should not occur unless there really is no decision to be made and any one of the non-deterministic behaviours is acceptable in the system -- if this is not the case, it is really a case of too much abstraction in the specification. The other case for non-determinism is when we want to have an environment that we do not control (or build) -- in this case, a specification will not lead to an implementation, and issues of the quality of the specification are pretty much moot.

Now for "completeness". The semantics of statecharts has some notions of "undefinedness", but this has more to do with the so-called "temporal paradoxes" introduced by the semantics -- some variants such as Stateflow do not have a logical paradox, but lead to a very complex situation where the language designers have thrown up their arms and swept the problem under the carpet by "disallowing" it. Coming back to the semantics, it is pretty much an execution semantics and the focus is in ensuring that as far as possible, there is a well-defined behaviour for all possible inputs. When the semantics itself tries hard to "complete" a specification, what hope is there for talking of "incompleteness" in the specification??????

Finally, what is all this talk of modelling "concurrency" in statecharts? All that statecharts provide is some syntactic sugar for "parallelism" (see also my rant about "parallelism" and decomposition above). This again only masks the complexity and does not help in analysing it. If I really want a high-level model of concurrency and message passing, I would rather use something like Promela and the model-checking support provided by SPIN, which actually helps in analysing the concurrent behaviour.

All in all, Statecharts were proposed in the late 80s and they were a breakthrough in computer science in many ways. They were proposed by Harel, who was (and is) one of the foremost minds in computer science. However, I feel that because they were proposed by such a towering figure, and because of the considerable tool support, there are not enough people challenging Statecharts and questioning their "marketed" utility. Two decades down the line, I think it is high time we seriously re-evaluated Statecharts in every respect.

Wednesday, April 22, 2009

Isomorphism vs. identity

I came across a very interesting post by Vaughan Pratt on the FOM mailing list. The post contrasts the notions of identity and isomorphism. The distinction between these concepts was first forcefully brought into focus for me when studying category theory. All constructions in category theory are characterised only up to isomorphism. This emphasises that if different mathematical structures have the "same" properties, we really should not distinguish between them.

At the basic level, a category consists of objects and arrows (or even just arrows), and identity is required only for arrows. Objects are important only up to isomorphism. A general guideline I follow for modelling some phenomenon with categories is to try to map concepts to objects and arrows based on whether the concept requires identity or isomorphism. If identity does not seem to appear anywhere, then we need to consider using higher-order categories such as functor categories. However, it is important to drill down through all the levels of the categories, identify the concepts for which identity is important, and model these concepts using arrows. Of course, this is easier said than done, and we need considerable practice in working out all the book-keeping and figuring out the universal properties of all the categories involved before we can start performing "magic" with "abstract nonsense".

Anyway, back to my original topic of Vaughan Pratt's post. Let me reproduce the post verbatim so that I do not lose the original trigger to my thoughts:
Date: Sat, 18 Apr 2009 17:14:51 -0700
From: Vaughan Pratt
Subject: [FOM] When is it appropriate to treat isomorphism as identity?
To: Foundations of Mathematics
Message-ID: <49ea6cfb.7090802@cs.stanford.edu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

My recent complaint that uniqueness is too often taken for granted when dealing with questions of existence brought to light the ease with which people identify isomorphism with identity: a number of people wrote to remind me that uniqueness is common in many mathematical contexts, e.g. the cyclic group of order 5, the least upper bound of two elements of a lattice, etc.

Operations of course enforce uniqueness by definition of "operation": the sum of two integers, the concatenation of two lists, the union of two sets, etc.  However in a good many of the examples, "the" was not enforced by some operation but instead only meant "up to isomorphism."

Sometimes it is necessary to treat isomorphism as identity, sometimes it is merely convenient, sometimes it is inconvenient, and sometimes it is inconsistent or paradoxical.  Here are examples of each.

====================

1.  Necessary

(a) Equivalence definitions of "lattice".  A lattice can be defined as a poset every pair of elements of which has a least upper bound and a greatest lower bound, or as an algebra with operations of meet and join satisfying the lattice laws (idempotence, commutativity, associativity, absorption).  A poset or partially ordered set is a set with a reflexive transitive antisymmetric binary relation.  The antisymmetry requirement, that (in essence) isomorphic elements are equal, is *necessary for these two definitions to be equivalent*.  The notions of least upper bound and greatest lower bound continue to be meaningful for preordered sets, those for which the relation is only required to be reflexive and transitive, but preordered sets with all binary lubs and glbs do not 
form a natural class of algebras.

(b) Birkhoff duality of finite posets and distributive lattices, as per David Eppstein's excellent article at 
http://en.wikipedia.org/wiki/Birkhoff's_representation_theorem#Functoriality 
.  The monotone functions from a finite poset P to the two-element chain, i.e. its order filters, form a finite distributive lattice P*, the filters (homomorphisms to the two-element lattice) of which recover P up to isomorphism.  No such duality obtains for preordered sets, making antisymmetry necessary for Birkhoff duality.  Actually there are two types of isomorphism entering here, the isomorphism of elements in a clique within a preordered set, and the isomorphism between P and the poset of lattice filters of P*; regarding the latter it is convenient to think of the two posets as the same, but set theoretically unsound, see 
below under "4. Inconsistent".

(c) Enumeration of structures.  Neil Sloane's Encyclopedia of Integer Sequences counts the number A_0, A_1, A_2, ..., A_n, ... of posets, distributive lattices, tournaments, and many other structures of a given size n.  This only makes sense for structures defined up to isomorphism, as otherwise every A_i would be a proper class.  (There is however a 
notion of labeled structure which permits enumeration by assuming in effect that the n elements are the numbers from 1 to n; with that convention there is one labeled set of each finite cardinality, but three labeled posets with two elements.)

2.  Convenient

We already mentioned the convenience of identifying P with the filters of P* above.  While it's not necessary to identify every countable dense linear order with the unit interval of rationals (open, half-open, or closed according to which endpoints exist), it can be convenient.  The 
same goes for a great many other structures, e.g. the free group on one generator as the group of integers, the initial ring as the ring of integers, etc.  Sometimes there is no canonical object, for example the Boolean algebra of finite and cofinite subsets of the set N of natural numbers under union, intersection, and complement relative to N is isomorphic to that of the ultimately constant binary sequences under coordinate-wise Boolean combination, with no clear advantage to either. 
  The same goes for the Boolean algebra of periodic binary sequences, which is isomorphic to the free Boolean algebra on N as well as to the (unique) countably infinite atomless Boolean algebra.

3.  Inconvenient.

The forgetful functor U: Pos --> Set producing the underlying set of each poset in the category Pos has a left adjoint F: Set --> Pos which maps each set X to the corresponding discrete poset.  Inconveniently, U has no right adjoint.  On the other hand the category Ord of preordered sets, which extends Pos, has a forgetful functor U+: Ord --> Set which conveniently has both a left and a right adjoint.  The left adjoint F+: Set --> Ord yields discrete preordered sets as for Pos, while the right adjoint G: Set --> Ord yields codiscrete preordered sets or cliques, which don't exist in Pos for posets with two or more elements.

4.  Inconsistent

The powers N, N^2, N^3, N^4 of the set N of natural numbers are all isomorphic.  Set theory promises that we can implement N^3 as either Nx(NxN) or (NxN)xN.  If we use the former then the three projections are p, pq, and q^2, if the latter then p^2, qp, and q.  This works fine as long as Nx(NxN) and (NxN)xN remain distinct (albeit isomorphic) sets. However if we identify NxN with N then according to set theory we must have p = p^2, pq = qp, and q^2 = q.  But then p, pq, and q are the only functions that can be formed from the two projections from NxN, making it impossible to obtain four distinct projections from N^4 no matter how realized using NxN.

To make things consistent we could disallow the implementation of N^3 as (NxN)xN, but then it would no longer be set theory as we know it, but some other theory of sets that we would have to work out afresh. Identifying isomorphic sets is inconsistent with set theory as 
standardly defined and used.

(This adapts to set theory a similar argument by John Isbell for monoidal categories, see Mac Lane, Categories for the Working Mathematician, near the end of VII-1.)

=====================

A benefit of mathematical logic is that it makes explicit principles of logic that we might otherwise consider inviolable, allowing us to consider the consequences of violating them by replacing them with competing rules, thereby contributing to the ongoing undermining of the efficacy of pure reason.

This raises the following question.  It is clear that arguments purporting to show either the existence or uniqueness of any given entity beg the question of the necessity of the rules used in those arguments.  That said, is the situation entirely symmetric between existence and uniqueness, or is there some reason to suppose that existence arguments might be supportable by rules having a more necessary character than those supporting uniqueness arguments?  To demonstrate such an asymmetry would not require absolutely necessary rules, only some generally accepted or at least plausible ranking of rules by perceived necessity.

Vaughan Pratt



Why am I here?

I am a computer science researcher by profession, a software engineer by training, and a mathematician by inclination. Over the years I have come across a number of writings that I feel have had a profound effect on me. But without any means to write down my feelings about them, the effect these writings had on me has waned over time, and they have quite often been relegated to the infinite dust-bin of forgetfulness. Writing down my impressions on pieces of paper has not worked very well, as they cannot be conveniently accessed at later dates.

I have finally decided to try and blog my thoughts online so that I can go over the blog at a later date and try to recreate my feelings and re-live the highs I experienced originally. This is a very personal experience and I don't expect anyone to share these experiences with me. So if anyone comes across this blog and feels totally lost, I will not be surprised.

To set the tone of the blog, let me also tell you about two other blogs that I admire and follow:
  • John Baez's blog "This Week's Finds in Mathematical Physics" comes at the top of my list. John Baez is an amazing writer and his exposition of complex mathematical concepts has been the biggest source of "highs" for me.
  • "Lambda the Ultimate" is a very interesting weblog, and even though I am a lurker there, I follow it and use it as a good indication of the pulse of computer science.