Avik’s Ruminations

Musings on technology and life by Avik Sengupta

Entropic Gravity?

with one comment

The physics world is abuzz with excitement over Eric Verlinde’s Entropic Gravity paper. It suggests that differences in entropy between parts of the universe generate a force that redistributes matter in a way that maximises entropy, and that this is the force we call gravity. Many other papers are now coming out with similar ideas, including one that relates gravity to quantum information for the first time. A brief summary is here, and a more detailed description of how entropic forces arise is here.
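
As background (my gloss, not language from the paper itself): an entropic force is simply the statistical pull toward configurations of higher entropy, and Verlinde’s derivation pairs that idea with two standard results, roughly as follows.

```latex
% An entropic force: a system at temperature T whose entropy S depends
% on a coordinate x feels an effective force toward higher entropy.
F = T \frac{\partial S}{\partial x}

% Verlinde combines a Bekenstein-style entropy change near a holographic
% screen with the Unruh temperature of an accelerated observer ...
\Delta S = 2\pi k_B \frac{mc}{\hbar}\,\Delta x ,
\qquad
k_B T = \frac{\hbar a}{2\pi c}

% ... and F \Delta x = T \Delta S then reproduces Newton's second law.
F = T \frac{\Delta S}{\Delta x} = m a
```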

So is gravity a side effect of information? I think it’s a very intriguing idea, a new way to look at our fundamental forces. It’ll be interesting to watch as this gets critiqued and verified over the years.

Written by

March 28th, 2010 at 12:32 am

Posted in Technology

How do we know it?

with 3 comments

It has been one of my long-term sources of frustration that developers and their managers vigorously debate various positions on how to build software better without a shred of evidence or measurement. Usually, anecdotes and opinions masquerade as facts, and blog posts are cited as evidence. It’s as if the scientific method completely bypassed the software industry. So, when I found this presentation titled Bits of Evidence (What we actually know about software development and why we believe it’s true) by Greg Wilson, I was constantly nodding in agreement.

Contrasting the situation in medical research, where randomized double-blind studies are the norm, he writes:

… even the best of us aren’t doing what we expect the makers of acne creams to do.

While there may be a growing emphasis on empirical studies in academic publications, that emphasis has not been transmitted to practitioners in the field. Without a wider understanding of the empirical evidence gathered through the scientific method, the industry’s state of the art will remain woefully inadequate.

I’ve written earlier about the Lutz Prechelt paper; why would you want to get into a debate about language productivity without considering the evidence it offers? Similarly, when you estimate, or ask others for estimates, shouldn’t you consider Aranda and Easterbrook’s paper on the effects of anchoring? Or, when structuring software teams, consider Nagappan’s papers at Microsoft Research showing that bug counts do not depend on the physical distance between team members, but do depend on organizational distance.

As Wilson asks, shouldn’t our development practices be built around these facts? There is a better way. Should we call it “evidence-based software development”?

Written by

October 26th, 2009 at 1:57 am

Posted in Technology

It’s the people …

with 2 comments

This is old news for most, but I’m posting it here so that I have something to point people at when I hear yet another passionate conversation about productivity and programming languages.

In 2000, Lutz Prechelt compared several programming languages in a reasonably rigorous experimental structure, and came to the conclusion that

In general, the differences between languages tend to be smaller than the typical differences due to different programmers within the same language.

Language Productivity

This is not to say that fundamental innovation in programming languages is impossible (think garbage collection). There is also no denying that languages differ substantially: in tooling, in the culture of their communities, certainly in people’s personal preferences, and in the sweet spots of their use cases. However, it is equally certain that most discussion on this topic is superficial, emotional rather than evidence-based, and woefully ignorant of the experimental results in this area.
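
A minimal sketch of the shape of that comparison, using made-up numbers rather than Prechelt’s data (every figure below is purely illustrative): group working times by language, then compare the spread between the language means with the spread among programmers within each language.

```python
from statistics import mean, pstdev

# Hypothetical working times in hours -- NOT Prechelt's data, purely to
# illustrate the shape of the comparison.
times = {
    "tcl":    [5, 9, 14, 20, 26],
    "python": [4, 8, 13, 21, 30],
    "java":   [6, 11, 17, 25, 40],
    "c":      [7, 12, 19, 30, 50],
}

# Within-language spread: how much programmers differ using one language.
within = mean(pstdev(v) for v in times.values())

# Between-language spread: how much the per-language means differ.
between = pstdev([mean(v) for v in times.values()])

print(f"mean within-language std dev: {within:.1f} h")
print(f"between-language std dev:     {between:.1f} h")
# With numbers shaped like these, the within-language spread dominates --
# which is the form of Prechelt's conclusion.
```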

Written by

September 18th, 2009 at 11:50 pm

Arrow of Time

with one comment

A recent paper by Lorenzo Maccone in Physical Review Letters has what I think is a very interesting argument about the arrow of time.

A long-standing dilemma in physics is that all physical laws work the same irrespective of whether time moves forward or backward, yet we experience time moving in one direction only. Specifically, we only ever observe entropy increasing: you can see an egg crack or milk spill, but you will never see those phenomena reverse spontaneously. However, there has been no widely accepted theoretical basis for why that should be so.

In the paper, the author argues that processes that increase entropy leave information behind, while processes that reduce entropy necessarily involve erasing information. This means physics cannot study processes in which entropy has decreased, even if they were commonplace, because no record of them would survive.
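
A related, well-established anchor for the information-erasure side of the argument (my addition, not a claim from Maccone’s paper) is Landauer’s principle: erasing information has an irreducible thermodynamic cost.

```latex
% Landauer's principle: erasing one bit of information increases the
% entropy of the environment by at least
\Delta S \ge k_B \ln 2

% equivalently, it dissipates at least this much heat at temperature T:
E \ge k_B T \ln 2
```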

Written by

September 4th, 2009 at 8:00 am

Posted in Science

Merging and Branching

with one comment

Joe Morrison talks about branching and merging in his latest post. I was about to comment on his blog, but my response became so long that it probably deserves a post on its own, so here goes.

Many established (old school/expensive? 🙂 ) SCM systems (ClearCase, Continuus/Synergy, etc.) are built around making carefree branching easy, and indeed desirable. However, they are also heavily centralised systems, and thus I feel a slight sense of irony around the current hype over distributed version control systems, which are really all about having your own private branch(es).

The primary use-case for git in the Linux kernel development is the patch workflow: the ability to create, maintain, verify and apply individual patches. The lack of a centralised server is really an optional extra, a nice side effect.

Which brings me to my central point: I think the branching strategy you use should really be a function of your release planning methodology. Ask yourself this: Are your releases on a timed schedule, or on a feature schedule? Do you have multiple teams working on features with differing times to market, or is everybody working on the same schedule? Are your feature sets held constant during development, or are they constantly shuffled? How often do you find yourself slipping planned features from one release to another? These and other such questions should determine your branching strategy.

A good way to think about these issues is shown in a great presentation by Laura Wingerd (that I found linked via Joe’s post), which talks about the concepts of ‘Convergent Branching’ vs ‘Divergent Branching’.

There is one other consideration that matters: integration environments. For a self-contained project, there is no cost to having a multitude of integration environments. However, for more integration-oriented projects, even if your SCM lets you create private branches easily, you’ll need to worry about how to test each branch. How far do you trust your unit tests to go before you’ll want to do integration testing against real backends? Do you need an integration environment for every private branch, or can you get by with integration testing after a merge into the mainline?

It is unfortunate that a developer population weaned on CVS and SVN (myself included) has been trained to consider merging a pain, and thus forces itself to minimise branching. Rather, try to get the tool to follow your requirements, and not the other way around.

Written by

September 1st, 2009 at 12:11 am

Posted in Technology

Co-routines as a replacement for State Machines

without comments
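
A minimal illustration of the idea in Python (my own sketch, not code from the linked piece): logic that would otherwise need an explicit state variable can keep its state implicitly in the execution position of a generator-based coroutine.

```python
def number_parser():
    """Receives characters one at a time and reports each complete
    integer it sees.  The parser's state (between numbers vs inside
    a number) lives in the position of execution, with no explicit
    state variable or transition table."""
    while True:
        ch = yield                 # state: BETWEEN_NUMBERS
        if not ch.isdigit():
            continue
        digits = ch
        while True:                # state: IN_NUMBER
            ch = yield
            if ch.isdigit():
                digits += ch
            else:
                print("number:", int(digits))
                break

parser = number_parser()
next(parser)                       # prime the coroutine
for ch in "ab12 c345 ":
    parser.send(ch)
# prints: number: 12
#         number: 345
```

The equivalent explicit state machine would carry a state field and a dispatch over (state, input) pairs; the coroutine version folds that bookkeeping into ordinary control flow.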

Written by

August 31st, 2009 at 6:00 am

Posted in Technology

Dive Into HTML5

without comments

Mark Pilgrim has started writing Dive Into HTML5, and has the first chapter, on Canvas, online now. In addition to interesting content, it’s beautifully typeset, in a very “retro” style. If you aren’t seeing the fonts as in the screenshot below, get a better browser.

Dive into HTML5

HTML5 is quite a revolutionary upgrade to the browser, but the key outstanding question is tooling and framework support. I’m sure it’ll come, but I’m not sure when, or whether it will be too late. The Register has more thoughts on that here.

Written by

August 30th, 2009 at 2:30 pm

Posted in Technology

Black Swans and Dragon Kings

without comments

A new paper by Didier Sornette at ETH Zurich adds the concept of “dragon kings” to our understanding of rare large events. While a black swan event is seen as being generated by the same statistical power-law distribution that generates small events, the author considers dragon-king events to exist beyond the normal power-law distribution. However, he suggests that the lower heterogeneity behind them can lead to better predictability, even if it also leads to more catastrophic outcomes.

We develop the concept of “dragon-kings” corresponding to meaningful outliers, which are found to coexist with power laws in the distributions of event sizes under a broad range of conditions in a large variety of systems. These dragon-kings reveal the existence of mechanisms of self-organization that are not apparent otherwise from the distribution of their smaller siblings. We present a generic phase diagram to explain the generation of dragon-kings and document their presence in six different examples (distribution of city sizes, distribution of acoustic emissions associated with material failure, distribution of velocity increments in hydrodynamic turbulence, distribution of financial drawdowns, distribution of the energies of epileptic seizures in humans and in model animals, distribution of the earthquake energies). We emphasize the importance of understanding dragon-kings as being often associated with a neighborhood of what can be called equivalently a phase transition, a bifurcation, a catastrophe (in the sense of Rene Thom), or a tipping point. The presence of a phase transition is crucial to learn how to diagnose in advance the symptoms associated with a coming dragon-king. Several examples of predictions using the derived log-periodic power law method are discussed, including material failure predictions and the forecasts of the end of financial bubbles.
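
A rough way to picture this, with simulated data rather than anything from the paper (the parameters below are made up for illustration): sample event sizes from a power law and tabulate the empirical survival function on log-log axes. The power-law bulk traces a straight line, and a dragon king shows up as a point far off that line.

```python
import math
import random

random.seed(42)

# Purely illustrative event sizes: a Pareto (power-law) bulk ...
alpha = 1.5
events = [random.paretovariate(alpha) for _ in range(10_000)]
# ... plus one hand-inserted "dragon king" far beyond the power-law tail.
events.append(10 * max(events))

# Empirical survival function P(X >= x): on log-log axes a power law is a
# straight line of slope -alpha; the final point (the dragon king) jumps
# far to the right of that line.
events.sort()
n = len(events)
for i in range(0, n, n // 20):
    x, ccdf = events[i], (n - i) / n
    print(f"log10(x) = {math.log10(x):6.2f}   log10(P) = {math.log10(ccdf):6.2f}")
```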

Written by

July 27th, 2009 at 11:01 pm

Posted in Finance

Tom DeMarco on Software Engineering

without comments

Tom DeMarco has an interesting article in this month’s IEEE Software, where he wonders if Software Engineering is an idea whose time has come and gone.

Reflecting on his early book on software metrics, he suggests that software projects could do with less control and more management.

I’m gradually coming to the conclusion that software engineering is an idea whose time has come and gone. I still believe it makes excellent sense to engineer software. But that isn’t exactly what software engineering has come to mean. The term encompasses a specific set of disciplines including defined process, inspections and walkthroughs, requirements engineering, traceability matrices, metrics, precise quality control, rigorous planning and tracking, and coding and documentation standards. All these strive for consistency of practice and predictability.

Consistency and predictability are still desirable, but they haven’t ever been the most important things. For the past 40 years, for example, we’ve tortured ourselves over our inability to finish a software project on time and on budget. But as I hinted earlier, this never should have been the supreme goal. The more important goal is transformation, creating software that changes the world or that transforms a company or how it does business. We’ve been rather successful at transformation, often while operating outside our control envelope. Software development is and always will be somewhat experimental. The actual software construction isn’t necessarily experimental, but its conception is. And this is where our focus ought to be. It’s where our focus always ought to have been.

“More important is transformation” … words to live by!

Written by

July 27th, 2009 at 10:33 pm

Posted in Technology

All those arrows

without comments

A very cogent analysis of the CDO industry over at the London Review of Books, based on Fool’s Gold: How Unrestrained Greed Corrupted a Dream, Shattered Global Markets and Unleashed a Catastrophe, the new book by Gillian Tett, the capital markets editor of the Financial Times.

The primary premise: the first use of the CDO was to package up a wide variety of corporate debt whose risks of default had reasonably low correlation with one another. Using similarly low correlation numbers for CDOs backed by MBSs, however, was grossly incorrect. A must-read!
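
To see why that correlation number matters so much, here is a toy Monte Carlo (my sketch of the standard one-factor Gaussian copula, with made-up parameters, not anything from the book): the expected loss of the pool barely moves with correlation, but the probability of losses reaching a senior tranche explodes as correlation rises.

```python
import random
import statistics
from math import sqrt

def pool_loss_fraction(n_assets, p_default, rho, rng):
    """One scenario under a one-factor Gaussian copula: each asset
    defaults when a common market factor plus an idiosyncratic shock
    falls below a threshold set by the marginal default probability."""
    threshold = statistics.NormalDist().inv_cdf(p_default)
    market = rng.gauss(0.0, 1.0)
    defaults = 0
    for _ in range(n_assets):
        z = sqrt(rho) * market + sqrt(1 - rho) * rng.gauss(0.0, 1.0)
        if z < threshold:
            defaults += 1
    return defaults / n_assets

rng = random.Random(0)
attachment = 0.15   # senior tranche is hit once pool losses exceed 15%

for rho in (0.05, 0.30, 0.60):
    losses = [pool_loss_fraction(100, 0.05, rho, rng) for _ in range(2_000)]
    hit = sum(loss > attachment for loss in losses) / len(losses)
    print(f"rho={rho:.2f}  mean pool loss={statistics.mean(losses):.3f}  "
          f"P(senior tranche hit)={hit:.3f}")
```

With low correlation the senior tranche looks near-riskless; assume mortgage defaults behave like that when they don’t, and the model badly understates the tail.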

Written by

June 22nd, 2009 at 10:51 pm

Posted in Finance