Saturday, February 23, 2013

about computers


A computer is a machine that helps us calculate, simulate, and store information. For example, to write an e-mail, instead of paper and pen we use a piece of software (a program) called a word processor: the keyboard (input) lets us enter sentences, the screen (output) lets us read them, and the modem (input/output) sends the message to a distant relative, friend, etc.
By simulating regular mail, e-mail gave us a much faster and cheaper medium of communication (no longer a simulation at all). In the same way, computers let us simulate things over which we have no control, such as the weather, the behaviour of an atomic bomb or a deadly virus, an earthquake, or an innovative design for a new car, airplane, or machine.
Every computer has five parts: input, output, CPU, memory, and disk (storage). Input devices include the mouse, keyboard, and modem; output devices include the screen and printer. The CPU, or central processing unit, is the brain of the computer: it controls and executes all calculations, manipulations, and output. Memory (RAM) is temporary storage the CPU uses while it works; think of it as the CPU's scratch pad. The disk is permanent storage, on which all software and data are kept.
When you turn on your computer, the BIOS (basic input/output system), a small piece of software stored on a CMOS (complementary metal oxide semiconductor) chip, tells the CPU to read its next instruction from a particular sector of a particular disk. That instruction loads the operating system.
A database is software that lets users organize their data in an orderly fashion. For example, consider a company that sells cookies: it keeps a database with tables of customers, cookie types, and orders, so when customer X orders N cookies of a given type, that order is recorded in the orders table. There are several types of databases: some are simply text files of records, others are complex collections of tables. A table of information means "an array of one type of record", for example "an array of customer names, addresses, and phone numbers". A relational database is one in which there are relations among the tables. Consider three tables holding customer information, inventory information, and order information: when customer X from the customer table orders item Y from the inventory table, a row is placed in the orders table, and links between the tables (through primary, secondary, and foreign keys such as a Social Security number, product number, or order number) let us do that, making it a relational database. Popular relational databases include Access for PCs running Windows 95, and Oracle, Sybase, and Informix for large business environments (typically running Unix operating systems).
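The cookie-shop example above can be sketched with SQLite, a small relational database that ships with Python. The table and column names here are invented for illustration; the point is how the foreign keys in the orders table link the other two tables together.

```python
import sqlite3

# An in-memory database; nothing is written to disk.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE inventory (id INTEGER PRIMARY KEY, cookie_type TEXT)")
# The orders table links the other two through foreign keys --
# these links are what make the database "relational".
cur.execute("""CREATE TABLE orders (
                   id INTEGER PRIMARY KEY,
                   customer_id INTEGER REFERENCES customers(id),
                   item_id INTEGER REFERENCES inventory(id),
                   quantity INTEGER)""")

cur.execute("INSERT INTO customers VALUES (1, 'Alice')")
cur.execute("INSERT INTO inventory VALUES (1, 'Chocolate Chip')")
cur.execute("INSERT INTO orders VALUES (1, 1, 1, 12)")

# A join follows the links between the tables to answer "who ordered what?"
row = cur.execute("""SELECT c.name, i.cookie_type, o.quantity
                     FROM orders o
                     JOIN customers c ON o.customer_id = c.id
                     JOIN inventory i ON o.item_id = i.id""").fetchone()
print(row)  # ('Alice', 'Chocolate Chip', 12)
```

The same pattern scales up: a real shop would have many rows per table, but the joins through the keys work exactly the same way.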

The Internet is a collection of computers connected to each other. It started when about ten computers running the Unix operating system and serving the US military were connected together and named ARPANET. Initially users could only send e-mail to one another, using UUCP (Unix-to-Unix copy over modems). Then more computers from universities were added to ARPANET, and researchers began sharing their notes over e-mail. Later came Usenet, which was more or less a discussion forum. Then, after tremendous innovations in networking hardware, the early 1990s brought the Web and software called the web browser, which could display pictures as well as text. The rest is history. Some terms:
Programming languages are designed to help humans write code for computers. Since computers only understand the language of 0s and 1s, while humans prefer English-like languages, many programming languages were developed to translate human-readable code into machine language. Using a language such as C/C++, Visual Basic, or Java, a person writes code and then compiles it, creating an executable file that the machine understands. All .exe and .class files are executables translated into a form the computer understands.
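As a small illustration of the translation step, Python's built-in `compile` turns source text into bytecode (an intermediate form, rather than a native .exe), and the standard `dis` module shows the individual low-level instructions inside the compiled result:

```python
import dis

# A line of human-readable source code...
source = "total = 2 + 3"

# ...is compiled into a code object containing low-level instructions.
code = compile(source, "<example>", "exec")

# dis prints those instructions (names like LOAD_CONST and STORE_NAME
# vary between Python versions).
dis.dis(code)

# Executing the compiled code produces the result the source described.
namespace = {}
exec(code, namespace)
print(namespace["total"])  # 5
```

The same idea underlies a C compiler producing a .exe or a Java compiler producing a .class file: human-readable text goes in, machine-executable instructions come out.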

Networking means connecting PCs and other machines to each other. A group of computers in the same building connected to servers through hubs is a local area network (LAN), as in an office building. Many buildings with many computers connected together form a WAN, or wide area network, as at a university. By the same terminology, the Internet might be called a global area network. Servers are computers that control user access to files and run all the time; computers that publish information on the Internet are servers, since the Internet demands access to information around the clock. When you use a modem to connect your personal computer to the Internet, you are connecting to a computer that is itself connected to the Internet.
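The client/server idea described above can be sketched in a few lines of Python, with both ends running on the same machine over the loopback address (the port is chosen by the operating system; this is a toy echo service, not a real Internet server):

```python
import socket
import threading

def serve(listener):
    """Accept one client, read its message, and echo it back."""
    conn, _addr = listener.accept()      # wait for a client to connect
    data = conn.recv(1024)               # read the client's request
    conn.sendall(b"echo: " + data)       # answer it
    conn.close()

# The "server": listens for incoming connections.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=serve, args=(listener,))
t.start()

# The "client": connects to the server and sends a message.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
print(reply)  # b'echo: hello'

t.join()
client.close()
listener.close()
```

On a real network the client and server would be different machines, but the connect/send/receive pattern is the same.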

An operating system is the piece of software that communicates with the computer's hardware and translates user commands back and forth. Operating systems have four parts: the process manager, memory manager, I/O manager, and file manager. Popular operating systems are Windows 95, Windows 98, Windows NT, Mac OS, SunOS, Digital Unix, HP-UX, Solaris, AIX, etc.


Applications are the end-user products that run on a computer: games, word processors such as Word, and spreadsheets such as Excel are all applications.

Personal computers are the machines that sit on your desk at home or in the office, usually running Windows 95, Mac OS, Windows 3.1, Windows 98, Windows NT Workstation, or another operating system designed for PCs. Servers usually run more robust operating systems such as Unix or Windows NT.

Computer Facts


Enjoy some great computer facts and interesting information about these amazing devices, which play such an important role in our modern-day lives.
Learn about parts of the computer such as the RAM, ROM and CPU, as well as fun info about how we use computers to make our lives easier and more enjoyable.
 

  • Early electronic computers, developed around the 1940s, were the size of a large room and consumed huge amounts of electricity. They were vastly different from the modern computers we use today, especially when compared to small, portable laptop computers.
  • Computers are programmed to carry out instructions. These instructions are usually very simple: add two numbers together, move data from one place to another, and so on.
  • A computer program can contain anything from a few instructions to many millions of them, depending on its complexity. Modern applications such as word processors, web browsers and graphic editors take large teams of programmers a long time to complete.
  • A computer’s memory stores numbers in a huge number of cells that are addressed and can be quickly accessed by the CPU to perform calculations. There are two main types of computer memory: ROM (read-only memory) and RAM (random access memory). ROM contains pre-written software and data that the CPU can only read, while RAM can be read and written at any time.
  • Computers interact with a number of different I/O (input/output) devices to exchange information. These peripheral devices include the keyboard, mouse, display, hard drive, printer and more.
  • Computers are used to help link the world in the form of networks. Networked computers allow users to share and exchange data that is stored in different locations. You may have heard of a local area network (LAN) or wide area network (WAN) which connects areas of various sizes. The Internet is a vast network of computers spanning the globe that allows users to access email, the World Wide Web and other applications.
  • Although we normally think of computers as the ones we use in our everyday lives to surf the web, write documents, etc., small computers are also embedded into other things such as mobile phones, toys, microwaves and MP3 players. We use computers all the time, often without even knowing it!
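The point about simple instructions can be sketched with a toy machine. The instruction names and the three-register design here are invented for illustration; a real CPU works on the same principle, just with far more instructions executed far faster.

```python
# A toy machine: each instruction either loads a number, adds two
# registers, or moves (copies) data between registers.
def run(program):
    registers = {"A": 0, "B": 0, "C": 0}
    for op, *args in program:
        if op == "LOAD":      # put a constant into a register
            registers[args[0]] = args[1]
        elif op == "ADD":     # add two registers, store in a third
            registers[args[2]] = registers[args[0]] + registers[args[1]]
        elif op == "MOVE":    # copy data from one register to another
            registers[args[1]] = registers[args[0]]
    return registers

result = run([
    ("LOAD", "A", 2),
    ("LOAD", "B", 3),
    ("ADD", "A", "B", "C"),   # C = A + B
    ("MOVE", "C", "A"),       # copy the sum back into A
])
print(result)  # {'A': 5, 'B': 3, 'C': 5}
```

Even a large program is, underneath, just a very long sequence of steps this simple.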

History and computing

Historians have made use of computers in their research and teaching almost as long as computers have been in existence. The 1960s saw the first revolution in history and computing as historians harnessed the potential of the computer to analyse far more information than had previously been possible, provide greater precision to their findings, open up new avenues of research and enable verification and comparison of their research data.
It was these advantages that led to computers becoming the handmaiden of the 'new', quantitative, social scientific and cliometric history that sprang up in the post-war years. Thus computing became indelibly, and often pejoratively, associated with quantitative history, an association that it has struggled to shake off ever since. By the 1970s, the perhaps predictable backlash had begun, even among some of those who had been converts to the cliometric cause. The 'new' history was attacked for turning historians into statisticians, slaves to quantitative analysis, narrowly focused and divorced from the people, places and events they purported to research. The debate was epitomised by the controversy over Fogel and Engerman's 1974 two-volume quantitative analysis Time on the Cross: The Economics of American Negro Slavery.(1) The authors were unfairly accused of endorsing slavery by suggesting that on economic grounds it was not an inefficient or uneconomic institution and that slaves had a better standard of living than many northern industrial workers. More substantive criticism, particularly by Herbert Gutman, could be made of the assumptions, sampling and mathematics used.(2) Even though later analysis supported Fogel and Engerman's conclusions, Time on the Cross provided all the ammunition needed for those seeking to discredit the 'new' history as a computational tail wagging the historical dog.
Like many revolutionary movements, the first wave may have inspired, challenged and threatened the mainstream majority in equal measure, but ultimately failed to persuade them. But the flame was not extinguished entirely. The early converts retreated into the newly formed departments of economic and social history, safe in the embrace of social science faculties rather than the hostile or indifferent environment of traditional historians in the faculty of arts.
It should come as no surprise to discover then that the subsequent development of history and computing can appear as complex, contradictory and confusing as the past events that historians seek to explain. Nevertheless, the history of history and computing, its current status and future potential is revealing not only of historians' various struggles with information technology but of their wider approach to the discipline.
In their 1993 publication Computing for Historians: an Introductory Guide, Evan Mawdsley and Thomas Munck entitle their first chapter 'History and computing: the second revolution'.(3) This second coming for history and computing in the mid 1990s was based on a number of technological advances: the advent of affordable microcomputers from the mid 1980s, improvements in networking and storage capacity, and the development of generic, relatively easy-to-use programs for common applications. Historians no longer had to grapple, sometimes literally, with paper tape, punch cards or reels of magnetic tape. They did not need to become accomplished programmers or beholden to those who were. Historical sources did not need to be reduced to a series of numeric codes, nor quantified to the nth degree, for meaningful results to emerge. There were no claims that computer methods would turn history upside down; in the UK at least, the movement sought to persuade rather than exhort, and emphasised the computer as a tool that could help the historian do what he or she had always done, but more efficiently and effectively.
The computing historian's weapon of choice was, and remains, the database, particularly in relational form. Long text fields enabled historians to retain something of the integrity of original sources and rapid advances were made on issues such as classification and record linkage. By this time historians could also include sophisticated text analysis and retrieval software, adopted from literary and linguistic studies, to analyse unstructured or semi-structured sources, qualitative analysis tools and sophisticated statistical packages in their armoury. Furthermore historians benefited as much as anyone from improved word-processing software, email, digitisation, CD-ROM content, Usenet groups, TELNET, networked library catalogues, bibliographic tools and the general improvement of computer speed, memory and storage. At last history and computing seemed poised to break into the mainstream.
During this period undergraduate options that included computing techniques emerged, as did Master's courses dedicated to history and computing, and instruction on database techniques could be found in doctoral training programmes. The international Association for History and Computing (AHC), founded in the mid 1980s, and its journal History and Computing went from strength to strength, and a multitude of national AHC organisations sprang up under its aegis. A plethora of acronyms (CTICH, TLTP, ALT, AHDS, HCC)(4) reflected a concerted effort by UK research councils to establish a national network of expertise to support computer use in research, teaching and data preservation. A new sense of optimism and momentum emerged as historians adopted and adapted generic applications and tools from cognate disciplines to historical research and teaching.
The mid 1990s were, however, the high water mark of the second revolution. Even at the time, proponents such as Speck were lamenting:
And yet, and yet, while there is much to celebrate about the last decade, the fact remains that the profession is still divided between the small minority of historians who use computers as tools for analysing historical data and the vast majority who, while they might use a PC for word processing, remain unconvinced of the case that it can become a methodological asset.(5)
Indeed, Speck argued that aside from historical demography and psephology, computing had made little contribution to British history, and that there had been a retreat to narrative and high politics. The remaining years of the 20th century were to see little change in this position. By the early 2000s history and computing had made few inroads into traditional historical methods. The AHC withered on the vine; many of its national associations became defunct, as did its journal after some prolonged death throes. While there was no backlash of the kind experienced in the first revolution, the vast majority of historical research remained untouched by computer methods.
This stasis masked an underlying split among proponents of history and computing. On the one hand were advocates of history and computing being recognised as a distinct discipline in its own right. Just as the life sciences have bio-informatics and geography has geomatics, so history should have historical information science (or historical information processing, or computational history). Proponents of this view argued that historical sources are simply too ambiguous, relationships too complex, and research questions need to be formalised and generalised in a way that makes generic IT applications and traditional methodology unsuitable. In this view 'history and computing' is not about the application of computer technology to solve specific historical problems but a way of modelling or representing the past that is simply made operational by information technology.
On the other hand pragmatists argued that generic database, spreadsheet and statistical software could do 90 per cent of what historians demanded of them. And other software, such as text retrieval and analysis packages, could be readily adopted from literary and linguistic computing. They countered that the real challenge lay in persuading the vast majority of historians of the benefit of even relatively simple information technology, not in developing specialist historical tools and methods that would only ever be of relevance to a minority of historians. The historical information science approach risked alienating mainstream historians even further and turning history and computing into a ghetto.
The truth, of course, lies somewhere in between, or rather with aspects of both approaches. Historians in continental Europe, particularly The Netherlands, Germany and Russia, with their greater emphasis on theory and methodology, gravitated towards the historical information sciences approach. British historians, rooted in the empiricist tradition with their attachment to primary sources, and generally less interested in theoretical models, methodology and process than outcomes, tended towards the pragmatic approach. Meanwhile, historians in the USA, stung by the fallout from Time on the Cross in the first revolution, tended to shun quantification in any shape or form, save for a handful of historically orientated economists. These approaches, however, are not diametrically opposed to each other or mutually exclusive. They should be viewed as two points on a methodological continuum, ranging from the computationally and methodologically light use of word processing, email and web searches, through the use of generic database and text analysis software that requires a modicum of computer experience and an understanding of computer-assisted research, to the computationally intensive, technically demanding and methodologically innovative terrain of historical information processing.
If the story thus far seems one of perpetually unfulfilled potential, history and computing, in whatever manifestation, could still point to some notable achievements. The sterile debate over quantitative versus qualitative has hopefully been put to rest. Historians of all persuasions make quantitative statements all the time, the distinction being that some express these statements in words and others in numbers. Whether analysing change over time or the relationship between cause and effect it is impossible to avoid talking about extent, range, scope, degree, duration, proportion or magnitude, whether one is using adverbs and adjectives or decimal points and chi-squares.
Historians may still write in the narrative tradition, but it is analytical narrative, not descriptive narrative, that has come to predominate. History and computing cannot claim all of the credit for this development, but debates about the role of quantitative analysis undoubtedly encouraged greater precision in history writing. The historian today can also draw on decades of experience in accurately structuring relationships in databases and in creating reliable record linkages and classification schemes that allow meaningful aggregation without blurring subtle distinctions. Nor does the employment of computer-assisted research inevitably lead the historian to structured sources or statistical analysis. Qualitative research tools are an equally adept, if under-utilised, companion to relational databases, as are the text analysis tools of literary and linguistic computing, which can provide revealing insights into historical texts with no, or minimal, encoding to corrupt the integrity of the primary source.
In Britain we can also lay claim to over a decade of leadership in teaching history and computing, creating teaching resources, disseminating good practice and in the field of historical data creation, preservation and access. Subjects as diverse as historical demography, psephology, prosopography, elite structures, entrepreneurial history, family history, urban history, political history, social history, economic history, transport history, medical history and education history have all benefited from the speed, precision, capacity and verification that the application of computer assistance brings. Furthermore, historians have successfully tackled many technical and methodological challenges; these include data modelling, record linkage, family reconstruction, multiple regression analysis, event history analysis, simulation, fuzzy data, content analysis and Geographic Information Systems (GIS).
What then of the future – can we envisage a third revolution in history and computing? Although crystal ball gazing is a hazardous occupation and the track record of history and computing should caution against making any kind of prediction, it might well be that one is already underway. There are two aspects to this third revolution. The first is the broader context of research funding and information providers in which historians conduct their research. The second is developments within the field of humanities computing. Looking at both these aspects reveals tentative signs that developments are engaging historians in ways previous revolutions have failed to do.
The shift in research funding towards strategic rather than responsive modes, and towards collaborative and interdisciplinary research, may create an environment that is more conducive to the use of computer assisted research. It is by no means certain that this will be the case, but addressing broader strategic issues through interdisciplinary collaboration may well expose historians to computer resources and applications they would not otherwise encounter.
Of more immediate significance is the vast increase in digitisation activity by cultural heritage organisations, which is putting a hitherto unimagined range of primary and secondary sources on historians' desktops. It remains to be seen whether historians make the leap from using a computer to find and retrieve this information to using it for analysis. Nevertheless, the availability of such sources removes, for some historians at least, one of the biggest barriers to the uptake of history and computing: that of creating machine-readable material. One of the biggest factors in the cost/benefit analysis is the time it takes to transcribe source material in the first place, particularly for those who are uncertain about the benefits it would bring. Moreover, these developments are engaging historians with librarians, archivists and curators in issues of resource creation and discovery that did not tend to arise in print culture, and are providing a useful testbed for the application of e-science and Grid technology to the humanities.(6)
If one turns to the broader field of humanities computing there are further signs of innovative work of relevance to the historical community. History and computing has always had a symbiotic relationship with humanities computing, but there is evidence that the nature of this relationship is changing. No longer is it simply a matter of adopting techniques from related disciplines; as Boonstra, Breure and Doorn have suggested in Past, Present and Future of Historical Information Science,(7) it is becoming a more collaborative relationship that includes those in the information and computer sciences. For example, the Armadillo project is a collaboration between the Department of History and the Department of Computing Science at the University of Sheffield. It has evaluated the benefits of semantic web technology for distributed historical materials, involving natural language processing, data mining, knowledge management and ontologies, all aspects that have not hitherto been given much consideration by the historical community, although they have exercised those in humanities computing for some time.
There is also evidence of renewed interest in the methods and tools being developed by linguists, particularly corpus linguists, in areas such as text mining, corpus annotation, automated tagging and historical thesauri. Despite the different aims and outcomes of their research, significant common ground exists between linguists and historians in their concern for the provenance, reliability and authenticity of texts, the challenges of variant spelling and meaning, and the opportunity for text mining tools to serve both communities. In this field the work of Paul Rayson (University of Lancaster) and Dawn Archer (University of Central Lancashire) in corpus annotation and retrieval, Clare Llewellyn and Rob Sanderson at The National Centre for Text Mining (NaCTeM) in text mining, and Christian Kay (University of Glasgow) in historical thesauri is worthy of particular mention. Aside from well-established text analysis tools such as TACT and Wordsmith, developments such as Wmatrix, The Historical Thesaurus of English, the nora project and the Monk project all offer exciting possibilities.
The field of GIS has also excited historians, and alliances of GIS with techniques such as visualisation and data mining present intriguing opportunities. 'Computational "Polichart" Cartography for Visualization of Historical GIS Patterns and Processes', by Prof. Claudio Cioffi-Revilla (Center for Social Complexity, George Mason University), and 'Integrating GIS and Data Warehousing in a Web Environment: a Case Study of the US 1880 Census', by Richard Healy (Dept. of Geography, University of Portsmouth), are just two papers at recent AHC-UK conferences (8) that demonstrate innovative and productive applications of new techniques to historical data. The work of Gidon Cohen (University of Durham) with multiple recapture methods also demonstrates historians' continuing refinement of methods more commonly found in the natural and social sciences to the challenges posed by fuzzy historical data. Meanwhile, John Bonnett (Brock University) provides a thought-provoking scenario for the future of history and computing in 'Abductive Reasoning, A-Life, and the Historian's Craft: One Scenario for the Future of History and Computing'.(9)
Such developments all provide for a more positive outlook than a decade ago, but if the third revolution in history and computing is to realise its potential several challenges remain. Research funding, infrastructure, education and training are all vital components, but above all historians need to acknowledge that their sources and methods are not so unique that they cannot be abstracted and generalised to a certain level to make the application of computer technology worthwhile. Equally, those in the information and computer sciences need to recognise that the 'one size fits all' mentality of generic applications and information systems is ill-suited to academic research, and even less suited to the heterogeneous field of humanities computing.

 



 

