Saturday, January 30, 2010

What are Some of the Different Computer Programming Languages?

There are thousands of computer programming languages. These languages are used to control the way computers behave, process information, express algorithms, and handle tasks. Some computer programming languages have been around for many years while new languages, or variations on old ones, are developed every year. Computer programming languages range from the simple and easy to use to very complicated languages used by only the most technologically proficient programmers.

One well-known computer programming language is called Visual BASIC. Microsoft Visual BASIC is considered a high-level programming language. It is descended from Beginners' All-purpose Symbolic Instruction Code (BASIC), a programming language from the Disk Operating System (DOS) era. Visual BASIC is considered simple to learn, featuring code that bears a similarity to written English. Visual BASIC is both visual and event driven, featuring a graphical environment for programming.

Pascal is also well known among those with an interest in computer programming languages. Developed in 1970 by Professor Niklaus Wirth, Pascal is an imperative language. An imperative language is one that uses computations as statements, changing program states through sequences of commands. Professor Wirth developed Pascal to fill feature gaps left by other computer programming languages. His development goals included designing a programming language that would facilitate the creation of well-structured programs, allow for implementation efficiency, and prove helpful in teaching computer-programming concepts.

Fortran is a general-purpose computer programming language that was first introduced by IBM in 1957. It is one of the most frequently used computer programming languages for numerical and scientific-computing applications. Fortran fits into the categories of general-purpose and imperative computer programming languages; it is also considered procedural. It is frequently used in computation-heavy areas, such as the computational sciences and climate modeling.

C++ is a high-level computer programming language. It is considered general purpose and is widely used on modern computers. Developed by Bjarne Stroustrup at Bell Labs, C++ was first introduced in 1985. This programming language was developed in the UNIX environment, allowing programmers to write code more easily and improve code quality. Additionally, C++ makes it possible to extend existing code without modifying it, for example through class inheritance.


Often considered a superset of the C programming language, C++ includes most features of the earlier language, and C++ compilers can compile most C programs. However, there are major differences. For example, the C programming language, developed in the early 1970s, employs structured programming concepts, while C++ is object oriented. C++ was designed with the goal of enhancing the C programming language.

Language development

Language development is a process starting early in human life, when a person begins to acquire language by learning it as it is spoken and by mimicry. Children's language development moves from simple to complex. Infants start without language, yet by four months of age, babies can read lips and discriminate speech sounds. The repetitive, preverbal sounds that infants produce are called babbling.

Usually, language starts off as recall of simple words without associated meaning, but as children grow, words acquire meaning, with connections between words being formed. As a person gets older, new meanings and new associations are created and vocabulary increases as more words are learned.

Infants use their bodies, vocal cries and other preverbal vocalizations to communicate their wants, needs and dispositions. Even though most children begin to vocalize and eventually verbalize at various ages and at different rates, they learn their first language without conscious instruction from parents or caretakers. In fact, research has shown that the earliest learning begins in utero, when the fetus can recognize the sounds and speech patterns of its mother's voice.

Theoretical frameworks of language development

There are four major theories of language development.

The behaviorist theory, proposed by B. F. Skinner, suggests that language is learned through operant conditioning. This perspective sides with the nurture side of the nature-nurture debate. It has not been widely accepted in either psychology or linguistics for some time but, by many accounts, is experiencing a resurgence; some empiricist accounts today use behaviorist models.

The nativist theory, proposed by Noam Chomsky, argues that language is a unique human accomplishment. Chomsky says that all children have what is called an LAD, an innate language acquisition device that allows children to produce consistent sentences once vocabulary is learned. His claim is based upon the view that what children hear - their linguistic input - is insufficient to explain how they come to learn language. While this view has dominated linguistic theory for over fifty years, it has recently fallen into disrepute.

The empiricist theory suggests, contra Chomsky, that there is enough information in the linguistic input that children receive, and therefore there is no need to assume an innate language acquisition device. This approach is characterized by the construction of computational models that learn aspects of language and/or simulate the type of linguistic output produced by children. The most influential models within this approach are statistical learning theories, such as connectionist models and chunking theories.

The last theory, the interactionist perspective, combines the nativist and behaviorist theories and consists of two components. The first, the information-processing theories, is tested through connectionist models using statistics; from these theories, we see that the brain is excellent at detecting patterns. The second component, the social-interactionist theories, suggests that there is an innate desire to understand others as well as to be understood by others.

Computer programming languages and generations

Programming languages are used to write the application programs that end users rely on. The languages themselves are generally used only by professional programmers. The development of programming languages has advanced considerably, improving the ease with which programmers can write powerful application programs for a wide range of tasks.

Each computer programming language has its own distinctive grammar and syntax and its own manner of expressing ideas. In principle, most computational tasks could be accomplished in any of the languages, but the programs would look very different; moreover, writing a program for a particular task can be easier in some languages than in others. The various generations of computer programming languages are discussed below.

1st generation languages
The first generation of computer languages was machine language: every machine used machine code, which consisted of 0s and 1s. Machine language is highly efficient and allows direct control of each operation; however, programmers had to write their programs entirely in 0s and 1s. Some of the drawbacks of the first-generation languages were:

· Programs were difficult to write and debug

· Programming process was tedious

· Programming was time consuming

· Programs were error prone

2nd generation languages
These were developed in the early 1950s, with the ability to use mnemonic abbreviations to speed the coding of programs. These second-generation languages were known as assembly languages. They could express operations, such as add and store, with short mnemonics. Like machine languages, assembly languages were designed for a specific machine and microprocessor; this means a program cannot be moved to a different computer architecture without rewriting the code, which in turn means learning another assembly language for the machine to which the program is transferred.

3rd generation languages
These were introduced between 1956 and 1963, a period that saw a major breakthrough in computing history with the development of high-level computer languages, popularly known as third-generation languages (3GLs). Examples of third-generation languages include the following:

FORTRAN – Formula Translation

FORTRAN was developed by IBM in the mid-1950s to provide an easier way to write scientific and engineering applications; it was especially useful for processing numeric data.

COBOL – Common Business Oriented Languages

COBOL came into use in the early 1960s. It was designed with business administration in mind, for processing large files of alphanumeric data (mixtures of letters and numbers) and for repetitive tasks such as payroll. Another early language was BASIC. These were the early computer programming languages in the early history of computers; there have since been many improvements, which will be discussed later.

Computer animation



Computer animation is the art of creating moving images with the use of computers. It is a subfield of computer graphics and animation. Increasingly it is created by means of 3D computer graphics, though 2D computer graphics are still widely used for stylistic, low-bandwidth, and faster real-time rendering needs. Sometimes the target of the animation is the computer itself, but sometimes the target is another medium, such as film. It is also referred to as CGI (computer-generated imagery), especially when used in films.

To create the illusion of movement, an image is displayed on the computer screen and repeatedly replaced by a new image that is similar to the previous image, but advanced slightly in the time domain. This technique is identical to how the illusion of movement is achieved with television and motion pictures.

Computer animation is essentially a digital successor to the art of stop motion animation of 3D models and frame-by-frame animation of 2D illustrations. For 3D animations, objects (models) are built on the computer monitor (modeled) and 3D figures are rigged with a virtual skeleton. For 2D figure animations, separate objects and separate transparent layers are used, with or without a virtual skeleton. Then the limbs, eyes, mouth, clothes, etc. of the figure are moved by the animator on key frames. The differences in appearance between key frames are automatically calculated by the computer in a process known as tweening or morphing. Finally, the animation is rendered.

For 3D animations, all frames must be rendered after modeling is complete. For 2D vector animations, the rendering process is the key frame illustration process, while tweened frames are rendered as needed. For pre-recorded presentations, the rendered frames are transferred to a different format or medium such as film or digital video. The frames may also be rendered in real time as they are presented to the end-user audience. Low bandwidth animations transmitted via the internet often use software on the end-users computer to render in real time as an alternative to streaming or pre-loaded high bandwidth animations.

A simple example
The screen is blanked to a background color, such as black. Then a goat is drawn on the right of the screen. Next the screen is blanked, but the goat is re-drawn or duplicated slightly to the left of its original position. This process is repeated, each time moving the goat a bit to the left. If this process is repeated fast enough the goat will appear to move smoothly to the left. This basic procedure is used for all moving pictures in films and television.

The moving goat is an example of shifting the location of an object. More complex transformations of object properties such as size, shape, lighting effects and color often require calculations and computer rendering instead of simple re-drawing or duplication.

Explanation
To trick the eye and brain into thinking they are seeing a smoothly moving object, the pictures should be drawn at around 12 frames per second (frame/s) or faster. At rates above 70 frame/s no improvement in realism or smoothness is perceivable, due to the way the eye and brain process images. At rates below 12 frame/s most people can detect jerkiness associated with the drawing of new images, which detracts from the illusion of realistic movement. Conventional hand-drawn cartoon animation often uses 15 frame/s in order to save on the number of drawings needed, but this is usually accepted because of the stylized nature of cartoons. Because it produces more realistic imagery, computer animation demands higher frame rates to reinforce this realism.

Friday, January 22, 2010

Information Technology

In the 1960s and 1970s, the term information technology (IT) was a little-known phrase used by those who worked in places like banks and hospitals to describe the processes they used to store information. With the paradigm shift to computing technology and "paperless" workplaces, information technology has become a household phrase. It defines an industry that uses computers, networking, software programming, and other equipment and processes to store, process, retrieve, transmit, and protect information.

In the early days of computer development, there was no such thing as a college degree in IT. Software development and computer programming were best left to the computer scientists and mathematical engineers, due to their complicated nature. As time passed and technology advanced, such as with the advent of the personal computer in the 1980s and its everyday use in the home and the workplace, the world moved into the information age.

By the early 21st century, nearly every child in the Western world, and many in other parts of the world, knew how to use a personal computer. Businesses' information technology departments have gone from using storage tapes created by a single computer operator to interconnected networks of employee workstations that store information in a server farm, often somewhere away from the main business site. Communication has advanced, from physical postal mail, to telephone fax transmissions, to nearly instantaneous digital communication through electronic mail (email).

Great technological advances have been made since the days when computers were huge pieces of equipment that were stored in big, air conditioned rooms, getting their information from punch cards. The information technology industry has turned out to be a huge employer of people worldwide, as the focus shifts in some nations from manufacturing to service industries. It is a field where the barrier to entry is generally much lower than that of manufacturing, for example. In the current business environment, being proficient in computers is often a necessity for those who want to compete in the workplace.

C (programming language)

C is a general-purpose computer programming language developed in 1972 by Dennis Ritchie at the Bell Telephone Laboratories for use with the Unix operating system.

Although C was designed for implementing system software, it is also widely used for developing portable application software.

C is one of the most popular programming languages and there are few computer architectures for which a C compiler does not exist. C has greatly influenced many other popular programming languages, most notably C++, which originally began as an extension to C.

Design
C is an imperative systems implementation language. It was designed to be compiled using a relatively straightforward compiler, to provide low-level access to memory, to provide language constructs that map efficiently to machine instructions, and to require minimal run-time support. C was therefore useful for many applications that had formerly been coded in assembly language.

Despite its low-level capabilities, the language was designed to encourage machine-independent programming. A standards-compliant and portably written C program can be compiled for a very wide variety of computer platforms and operating systems with little or no change to its source code. The language has become available on a very wide range of platforms, from embedded microcontrollers to supercomputers.

Minimalism
C's design is tied to its intended use as a portable systems implementation language. It provides simple, direct access to any addressable object, and its source-code expressions can be translated in a straightforward manner to primitive machine operations in the executable code. Some early C compilers were comfortably implemented on PDP-11 processors having only 16 address bits. C compilers for several common 8-bit platforms have been implemented as well.

Characteristics
Like most imperative languages in the ALGOL tradition, C has facilities for structured programming and allows lexical variable scope and recursion, while a static type system prevents many unintended operations. In C, all executable code is contained within functions. Function parameters are always passed by value. Pass-by-reference is simulated in C by explicitly passing pointer values. Heterogeneous aggregate data types allow related data elements to be combined and manipulated as a unit. C program source text is free-format, using the semicolon as a statement terminator.

C also exhibits the following more specific characteristics:

lack of nested function definitions
variables may be hidden in nested blocks
partially weak typing; for instance, characters can be used as integers
low-level access to computer memory by converting machine addresses to typed pointers
function and data pointers supporting ad hoc run-time polymorphism
array indexing as a secondary notion, defined in terms of pointer arithmetic
a preprocessor for macro definition, source code file inclusion, and conditional compilation
complex functionality such as I/O, string manipulation, and mathematical functions consistently delegated to library routines
a relatively small set of reserved keywords
a lexical structure that resembles B more than ALGOL, for example:
{ ... } rather than either of ALGOL 60's begin ... end or ALGOL 68's ( ... )
= is used for assignment (copying), like Fortran, rather than ALGOL's :=
== is used to test for equality (rather than .EQ. in Fortran, or = in BASIC and ALGOL)
&& and || in place of ALGOL's "∧" (AND) and "∨" (OR); note that the doubled-up operators will never evaluate the right operand if the result can be determined from the left alone , and are semantically distinct from the bit-wise operators & and |
However, the Unix Version 6 and Version 7 C compilers did use ALGOL's /\ and \/ ASCII operators, but for computing the infimum and supremum respectively.[1]
a large number of compound operators, such as +=, ++, etc. (Equivalent to ALGOL 68's +:= and +:=1 operators)

Absent features

The relatively low-level nature of the language affords the programmer close control over what the computer does, while allowing special tailoring and aggressive optimization for a particular platform. This allows the code to run efficiently on very limited hardware, such as embedded systems.

C does not have some features that are available in some other programming languages:

No direct assignment of arrays or strings (copying can be done via standard functions; assignment of objects having struct or union type is supported)
No automatic garbage collection
No requirement for bounds checking of arrays
No operations on whole arrays
No syntax for ranges, such as the A..B notation used in several languages
Prior to C99, no separate Boolean type (zero/nonzero is used instead)
No formal closures or functions as parameters (only function and variable pointers)
No generators or coroutines; intra-thread control flow consists of nested function calls, except for the use of the longjmp or setcontext library functions
No exception handling; standard library functions signify error conditions with the global errno variable and/or special return values
Only rudimentary support for modular programming
No compile-time polymorphism in the form of function or operator overloading
Only rudimentary support for generic programming
Very limited support for object-oriented programming with regard to polymorphism and inheritance
Limited support for encapsulation
No native support for multithreading and networking
No standard libraries for computer graphics and several other application programming needs
A number of these features are available as extensions in some compilers, or can be supplied by third-party libraries, or can be simulated by adopting certain coding disciplines.

Undefined behavior
Many operations in C that have undefined behavior are not required to be diagnosed at compile time. In the case of C, "undefined behavior" means that the exact behavior which arises is not specified by the standard, and exactly what will happen does not have to be documented by the C implementation. A famous, although misleading, expression in the newsgroups comp.std.c and comp.lang.c is that the program could cause "demons to fly out of your nose."[7]

Sometimes in practice what happens for an instance of undefined behavior is a bug that is hard to track down and which may corrupt the contents of memory. Sometimes a particular compiler generates reasonable and well-behaved actions that are completely different from those that would be obtained using a different C compiler. The reason some behavior has been left undefined is to allow compilers for a wide variety of instruction set architectures to generate more efficient executable code for well-defined behavior, which was deemed important for C's primary role as a systems implementation language; thus C makes it the programmer's responsibility to avoid undefined behavior, possibly using tools to find parts of a program whose behavior is undefined. Examples of undefined behavior are:

accessing outside the bounds of an array
overflowing a signed integer
reaching the end of a non-void function without finding a return statement, when the return value is used
reading the value of a variable before initializing it
These operations are all programming errors that could occur using many programming languages; C draws criticism because its standard explicitly identifies numerous cases of undefined behavior, including some where the behavior could have been made well defined, and does not specify any run-time error handling mechanism.

Invoking fflush on a stream opened for input is an example of a different kind of undefined behavior, not necessarily a programming error but a case for which some conforming implementations may provide well-defined, useful semantics as an allowed extension. Use of such nonstandard extensions generally limits software portability.