
Programming Languages in a Hundred Years


…or will we simply tell the computers what we want? So far, however, little progress has been made in that direction. I suspect that even a hundred years from now, people will still have to explain what they want to computers by means of programs.

One may well doubt whether it is even possible to predict what technology will be like in a hundred years. But remember that we already have almost fifty years of programming history behind us. Looking at how slowly languages have evolved so far, attempts at prediction no longer seem so hopeless.

Programming languages evolve slowly because they are not really technologies. A programming language is a notation. A program is a formal description of a problem you want a computer to solve. So the rate of evolution in programming languages is closer to that of mathematical notation than to that of, say, transportation or communications. Mathematical notation evolves too, but not in the giant leaps technology takes.

Whatever computers are made of a hundred years from now, there is no doubt they will be much faster than they are today. If Moore's Law continues to hold, they will be 74 quintillion times faster. That is hard to even imagine. But it is quite likely that Moore's Law will fail: anything that is supposed to double every eighteen months sooner or later runs into some fundamental limit.

But even if computers get only a measly million times faster, that in itself should change the ground rules for programming languages substantially. Among other things, there will be more room for languages that are now considered "slow", that is, languages that do not compile into very efficient code.

And yet applications that demand high performance will always exist. Some of the problems we solve with computers are created by computers: the rate at which you have to process video, for example, depends directly on the rate at which another machine can generate it. Besides that, there is a class of problems that by their nature can absorb any available resources: rendering, cryptography, simulation.

So while some applications can become ever less efficient, and others keep trying to squeeze the last drop out of the hardware, languages will have to cover an ever wider range of efficiencies. And this is already beginning to happen: current implementations of some popular new languages are shockingly wasteful by the standards of previous decades.

This is not something that happens only to programming languages; it is a universal historical trend. Each new generation of technology makes it possible to do things that would once have seemed wasteful. Thirty years ago, people would have been shocked to learn how casual long-distance telephone calls have become in our time. A hundred years ago, the news that a package would one day travel from Boston to New York via Memphis would have been more shocking still.

I can already tell you what will happen to all the extra resources that super-fast hardware will give us in a hundred years: nearly all of them will be wasted.

I learned to program when computers had scant resources. I remember deleting the spaces from my Basic programs so they would fit into the four kilobytes of memory of my TRS-80. The thought of all these appallingly inefficient programs gobbling up resources by doing the same thing over and over strikes me as somehow blasphemous. But here, it seems, my intuition is wrong. I am like someone who grew up poor and cannot bear to spend money even on essentials, like medicine.

But not all waste is bad. Given modern telecommunications infrastructure, per-minute billing for long-distance calls starts to look like penny-pinching. If the resources are there, it is more elegant to treat all calls as the same kind of thing, regardless of where the two parties are, rather than singling out long-distance ones.

There is good waste, and there is bad waste. I am interested in the good kind: waste where, by spending more, we can get a simpler design. How can we take advantage of the vast resources that fast new hardware will give us?

The craving for speed is so deeply ingrained in us, with our pathetic computers, that it takes a conscious effort to overcome it. In designing programming languages, we should deliberately look for situations where we can trade some efficiency for even a small gain in convenience.

The reason most data types exist is performance. Many modern languages, for example, have both strings and lists. Semantically, strings are more or less a special case of lists whose elements are characters. So why a separate data type? You don't really need one. Strings exist only for efficiency. But isn't it lame to clutter up the semantics of a language with hacks that make programs run faster? Strings in a language are yet another case of premature optimization.
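
To make this concrete, here is a minimal sketch (in Python, purely for illustration; the helper names are invented) of text treated as nothing more than a list of characters, with no separate string type:

```python
# A sketch of text handled as a plain list of characters, with no
# separate string type. All helper names are invented for illustration.

def text(s):
    """Build the list-of-characters representation from a literal."""
    return list(s)

def concat(a, b):
    # Concatenation of "strings" is just list concatenation.
    return a + b

def upper(chars):
    # Any string operation becomes an ordinary list operation.
    return [c.upper() for c in chars]

greeting = concat(text("hello, "), text("world"))
print(upper(greeting))  # ['H', 'E', 'L', 'L', 'O', ',', ' ', 'W', ...]
```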

If we think of the core of a language as a set of axioms, surely it is gross to keep, in the pursuit of efficiency, extra axioms that add no expressive power to the language. Efficiency is important, but I don't think this is the right way to get it.

The right way to solve this problem, I think, is to separate the meaning of a program from the details of its implementation. Instead of having both lists and strings, have just lists, plus some way to give the compiler optimization advice that will, when necessary, allow it to lay strings out as contiguous bytes.
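
What might such advice look like? Here is a hypothetical sketch; the advise() function and its arguments are entirely invented, and it is a no-op here only so the sketch stays runnable:

```python
# Hypothetical sketch of "optimization advice": the program is written
# against lists only, and a hint tells the implementation it may pack
# this particular list of characters into contiguous bytes.

def advise(value, representation):
    # A real compiler would act on the hint; the semantics never change.
    return value

name = advise(list("byte-packed text"), representation="bytes")
print(name[0:4])  # still list semantics: ['b', 'y', 't', 'e']
```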

Since speed doesn't matter in most of a program, you won't ordinarily need to bother with this sort of micromanagement. And the faster computers get, the more true this will be.

Saying less about implementation should also make programs more flexible. Specifications change while a program is being written, and that is not only inevitable but desirable.

The word "essay" comes from the French verb "essayer", which means "to try". An essay, in the original sense, was something you wrote in order to figure something out. The same happens in software. I think some of the best programs were essays, in the sense that their authors began without knowing exactly what they were trying to write.

Lisp programmers have long known the value of being flexible with data types. The first version of a program they tend to write using lists for everything. These initial versions can be so shockingly inefficient that it takes a conscious effort not to think about what they are doing, just as eating a steak is easier if you don't think about where it came from (for me, at least, that is the case).

A hundred years from now, programmers will want a language in which you can throw together an unbelievably inefficient first working version of a program quickly and with minimal effort. At least, that is how we would describe it in present-day terms. What they will say is that they want a language that is easy to program in.

Inefficient software is not blasphemous. What is blasphemous is a language that makes programmers do needless work. Wasting programmer time, not machine time, is the true inefficiency. And this will become ever more obvious as computers get faster.

Getting rid of strings is something we could already afford to do today. But how far can this simplification of data types go? There are possibilities that shock even me, though I have deliberately tried to stretch my own imagination. Could we get rid of arrays, for example? After all, arrays are just a special case of hash tables whose keys are integers. And might we replace hash tables themselves with lists?
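
A minimal sketch of the first step, assuming Python and an invented class name: an "array" realized as nothing but a hash table (a dict) whose keys are the integers 0..n-1, to show that arrays need not be a separate primitive:

```python
# Illustrative only: an "array" as a hash table with integer keys.

class DictArray:
    def __init__(self, items):
        self._table = {i: v for i, v in enumerate(items)}

    def __getitem__(self, i):
        return self._table[i]

    def __setitem__(self, i, value):
        self._table[i] = value

    def __len__(self):
        return len(self._table)

a = DictArray(["x", "y", "z"])
a[1] = "Y"
print(a[1], len(a))  # Y 3
```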

There are prospects even more daunting than that. The Lisp that Professor McCarthy described in 1960, for example, had no numbers. Logically, a separate notion of number is not needed, because numbers can be represented as lists: the integer n can be represented as a list of n elements. You can do arithmetic this way. It is just unbearably inefficient.

In practice, no one actually proposed implementing numbers as lists. In fact, McCarthy's 1960 paper was never intended to be implemented at all; it was a purely theoretical attempt to create a more elegant alternative to the Turing machine. When someone unexpectedly took the paper and turned it into a working Lisp interpreter, numbers, of course, were not represented as lists; they were represented in binary, as in every other language.
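
For the curious, here is what McCarthy-style "numbers as lists" looks like in runnable form (a Python sketch with invented helper names): the integer n is a list of n elements, and arithmetic is list manipulation. Unbearably inefficient, exactly as described above:

```python
# Numbers as lists: n is represented by a list of n elements.

def num(n):
    return [()] * n  # represent n as a list of n (arbitrary) elements

def add(a, b):
    return a + b  # addition is concatenation

def mul(a, b):
    return [x for _ in a for x in b]  # len(a) copies of b's elements

def value(a):
    return len(a)  # read the list back as an ordinary integer

print(value(add(num(2), num(3))))  # 5
print(value(mul(num(4), num(6))))  # 24
```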

Could the evolution of programming languages go so far that numbers are dropped as a fundamental data type? (I ask this not so much as a serious question as a way of teasing the future. It is like the hypothetical case of an irresistible force meeting an immovable object: here, an unimaginably inefficient implementation meeting unimaginably vast resources.) I think it could. Why not? The future is a long time. If there is something that can reduce the number of axioms in the core language, that is the side to bet on as t approaches infinity. Maybe the idea will still seem intolerable in a hundred years, but in a thousand years everything may have changed.

(Let me dot the i's: I am not proposing that all numerical calculations would actually be carried out using lists. I am proposing that the core of the language, prior to any additional notation about implementation, be defined this way. In practice, a program that does any real amount of arithmetic would probably represent numbers in binary, but that would be an optimization, not part of the core semantics of the language.)

Another good way to burn up the extra cycles is to have many layers of software between the application and the hardware. This, too, is a trend we can already observe: many new programming languages are compiled into byte code. As a rule of thumb, each layer of interpretation costs about a factor of ten in speed. That is the price paid for flexibility.

Writing a program as a set of layers is a powerful technique even within applications. In bottom-up programming, a program is written as a series of layers, each of which serves as a language for the layer above it. The more of your application you can turn into a language for writing that kind of application, the easier it becomes to reuse your code.
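
A small sketch of the style, with an invented domain and invented names: the lower layer is a tiny "language" for tabular data, and the application above it is written almost entirely in that vocabulary:

```python
# Layer 1: a tiny vocabulary for tabular data.
def rows(records, key):
    return sorted(records, key=key)

def column(records, field):
    return [r[field] for r in records]

def total(records, field):
    return sum(column(records, field))

# Layer 2: the application, written in layer-1 terms.
sales = [{"region": "north", "amount": 120},
         {"region": "south", "amount": 80}]

for r in rows(sales, key=lambda rec: rec["region"]):
    print(r["region"], r["amount"])
print("total:", total(sales, "amount"))
```

The point is that rows, column, and total are reusable by any other program in the same domain, regardless of whether they are wrapped in objects.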

In the eighties, the idea of reuse somehow got tied to object-oriented programming, and no amount of evidence to the contrary, it seems, will rid it of that stigma. But while some object-oriented code is reusable, what makes it reusable is not its object-orientation but its bottom-upness. Take libraries, for instance: they are reusable because, in effect, they are a language. And it does not matter whether they are written in an object-oriented style or not.

Incidentally, I am not predicting the death of object-oriented programming. Although, in my view, it has little to offer good programmers except in certain specialized applications, it is very attractive to large organizations. OOP is a sustainable way of writing spaghetti code: it lets you build a program as a series of patches. Large organizations have always been inclined to develop software this way, and I don't think a hundred years will change that.

Since we are talking about the future, we had better raise the topic of parallel computation, because that, it seems, is where the idea lives. That is, regardless of when the conversation takes place, parallel computation apparently remains something that is going to happen in the future.

Will the future ever catch up with it? People have been talking about parallel computation as something inevitable for twenty years now, and so far it has not much affected programming practice. Or has it? Chip designers already have to think about it, and so do programmers writing systems software for multiprocessor computers.

The real question is, how far up the ladder of abstraction will parallelism climb? Will application programmers have to know about it in a hundred years? Or will it remain a concern of compiler writers, barely visible in the source code of applications?

Most likely, the opportunities for parallelism will, for the most part, be wasted. This is a special case of my more general prediction that most of the extra computing power we are given will go to waste.

I expect that, as with the reckless speed of the underlying hardware, parallelism will be available if you explicitly ask for it, but will not ordinarily be used. This implies that the kind of parallelism common a hundred years from now will not, except in special applications, be massive parallelism. For the ordinary programmer it will look more like this: processes can be forked off, and the forked processes all run in parallel.
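
A sketch of that style in present-day terms, using only the Python standard library (simulate() is an invented stand-in for real work): the program asks for parallel execution explicitly and stays sequential everywhere else:

```python
from concurrent.futures import ProcessPoolExecutor

def simulate(seed):
    # Stand-in for an expensive, independent computation.
    total = 0
    for i in range(1_000_000):
        total = (total + seed * i) % 1_000_003
    return total

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # Each call is forked off; the four runs proceed in parallel.
        results = list(pool.map(simulate, [1, 2, 3, 4]))
    print(results)
```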

And, as with the specific implementations of data structures, this is something that will be done at a fairly late stage of development, during optimization. First versions will ordinarily ignore whatever advantages parallel computation could offer, exactly as they will ignore the advantages to be got from specific representations of data.

Except in certain special kinds of applications, parallelism will not pervade the programs written a hundred years from now. If it did, that would be premature optimization.