Monday, November 10, 2008

System console

The system console, root console or simply console is the text entry and display device for system administration messages, particularly those from the BIOS or boot loader, the kernel, the init system and the system logger. It is a physical device consisting of a keyboard and a screen.

On traditional minicomputers, the console was a serial console, an RS-232 serial link to a terminal such as a DEC VT100. This terminal was usually kept in a secured room since it could be used for certain privileged functions such as halting the system or selecting which media to boot from. Large midrange systems, e.g. those from Sun Microsystems, Hewlett-Packard and IBM, still use serial consoles. In larger installations, the console ports are attached to multiplexers or network-connected multiport serial servers that let an operator connect his terminal to any of the attached servers.

On PCs and workstations, the computer's attached keyboard and monitor have the equivalent function. Since the monitor cable carries video signals, it cannot be extended very far. Often, installations with many servers therefore use keyboard/video multiplexers (KVM switches) and possibly video amplifiers to centralize console access. In recent years, KVM/IP devices have become available that allow a remote computer to view the video output and send keyboard input via any TCP/IP network and therefore the Internet.

Some PC BIOSes, especially in servers, also support serial consoles, giving access to the BIOS through a serial port so that the simpler and cheaper serial console infrastructure can be used. Even where BIOS support is lacking, some operating systems, e.g. FreeBSD and Linux, can be configured for serial console operation either during bootup, or after startup.

It is usually possible to log in from the console. Depending on configuration, the operating system may treat a login session from the console as being more trustworthy than a login session from other sources.

Routers and managed switches (as well as other networking and telecoms equipment) may also have console ports; in particular, Cisco Systems routers and switches that run Cisco IOS are normally configured via their console ports.

Boot sector

A boot sector (sometimes called a bootblock) is a sector of a hard disk, floppy disk, or similar data storage device that contains code for booting programs (usually, but not necessarily, operating systems) stored in other parts of the disk.

On an IBM PC compatible machine, the BIOS selects a boot device and then copies the first sector from that device (which may contain an MBR, a VBR or any other executable code) to memory address 0x7C00.

The term bootblock is also used to describe a special kind of small program, usually launched by a computer system immediately after power-up or reset from non-volatile storage such as flash ROM. When the reset sequence completes, the CPU and hardware enter a predefined state and code execution starts. A boot block in this sense is usually the very first program executed by the CPU. For this reason, flash ROM (especially NOR flash) often has dedicated sectors intended to store such boot programs. These sectors are also called boot sectors, but here the term refers to a dedicated hardware type of sector: such sectors may have extra levels of protection against accidental erasure and overwriting, to avoid a scenario in which the system fails to boot at the very start of the boot sequence.

A typical bootblock stored in flash ROM ranges in size from 1 KiB to 512 KiB and usually does the following: it applies additional hardware initialization, performs minimal hardware tests, checks whether a more sophisticated boot loader, OS loader or other program can be started, and then either starts that next boot program or enters a special mode intended to help with system recovery. For example, IBM PC compatible systems with an Award or AMI BIOS have a small dedicated area, often about 8 KiB, called the bootblock, which runs first, checks whether the main BIOS part is present and valid, and either starts the main BIOS or enters a special recovery mode that can restore the main BIOS from a floppy disk (or sometimes from other media). Depending on the design, other systems may have similar software called a "bootblock".

In this sense, the term bootblock often refers to exactly the same kind of program as the term boot loader; there is no well-defined boundary between the two, and both terms can describe the same type of program. Generally, small and simple implementations of boot programs are more often called bootblocks, while smarter and more sophisticated boot programs are more often called boot loaders. Some ROM-based systems instead use the term boot ROM for the same type of program and the memory that stores it.


Kinds of boot sectors
Several major kinds of boot sectors can be encountered on IBM PC compatible hard disks, floppy disks and similar storage devices:

A master boot record (MBR) is the first sector of a data storage device that has been partitioned. The MBR sector may contain code to locate the active partition and invoke its Volume Boot Record.

A volume boot record is the first sector of a data storage device that has not been partitioned, or the first sector of an individual partition on a data storage device that has been partitioned. It may contain code to load and invoke an operating system (or other standalone program) installed on that device or within that partition.

To be a valid boot sector, the two-byte hex word 0xAA55, called the boot sector signature, must exist at the end of the sector; otherwise, either the BIOS or MBR code will report an error message and halt any OS bootstrapping process.
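
As a hedged illustration of this check, the following C sketch reads a 512-byte sector image from a file and tests the last two bytes for the 0x55, 0xAA signature; the file name "sector.img" and the error handling are purely illustrative.

/* Minimal sketch: verify the 0xAA55 boot sector signature in a
 * 512-byte sector image. The image file name is hypothetical. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t sector[512];
    FILE *f = fopen("sector.img", "rb");   /* hypothetical sector dump */
    if (!f || fread(sector, 1, sizeof sector, f) != sizeof sector) {
        fprintf(stderr, "could not read a full 512-byte sector\n");
        return 1;
    }
    fclose(f);

    /* The signature is the last two bytes of the sector: 0x55 at
     * offset 510 and 0xAA at offset 511 (0xAA55 as a little-endian word). */
    if (sector[510] == 0x55 && sector[511] == 0xAA)
        printf("valid boot sector signature\n");
    else
        printf("boot sector signature missing\n");
    return 0;
}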

CD-ROMs usually have their own structure of boot sectors; for IBM PC compatible systems this is governed by the El Torito specification.

Systems that are not IBM PC compatible may have different boot sector formats on their disk devices.

Sometimes the term boot sector refers to a special type of NOR flash ROM sector intended to store initial boot programs (such programs are usually called bootblocks or boot loaders). Boot sectors in flash ROM often implement extra protection against accidental erasure and rewriting, to avoid losing the bootblock program; in contrast, for boot sectors on disk devices the physical properties of the sectors storing boot programs are usually the same as those of other sectors.


Operation
On IBM PC compatible machines, the BIOS is ignorant of the distinction between VBRs and MBRs, and of partitioning. The firmware simply loads and runs the first sector of the storage device. If the device is a floppy disk, that will be a VBR. If the device is a hard disk, that will be an MBR. It is the code in the MBR which generally understands disk partitioning, and in turn, is responsible for loading and running the VBR of whichever primary partition is set to boot (the active partition). The VBR then loads a second-stage bootloader from another location on the disk.
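
To make the MBR/active-partition step concrete, here is a small, hedged C sketch that scans the classic MBR partition table (four 16-byte entries starting at offset 0x1BE, with status byte 0x80 marking the active partition) in a sector image read from standard input. It is a diagnostic illustration, not real boot code, which would run in 16-bit real mode.

/* Sketch: locate the active partition entry in a classic MBR image
 * supplied on standard input. Illustrative only; real MBR code runs
 * in 16-bit real mode, not as a hosted C program. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t mbr[512];
    if (fread(mbr, 1, sizeof mbr, stdin) != sizeof mbr) {
        fprintf(stderr, "expected a 512-byte MBR on stdin\n");
        return 1;
    }
    /* Partition table: four 16-byte entries starting at offset 0x1BE. */
    for (int i = 0; i < 4; i++) {
        const uint8_t *e = mbr + 0x1BE + 16 * i;
        if (e[0] == 0x80) {              /* 0x80 = bootable/active flag */
            /* Bytes 8-11 of the entry hold the partition's starting LBA. */
            uint32_t lba = (uint32_t)e[8] | (uint32_t)e[9] << 8 |
                           (uint32_t)e[10] << 16 | (uint32_t)e[11] << 24;
            printf("active partition %d, first sector at LBA %u\n", i, lba);
            return 0;
        }
    }
    printf("no active partition found\n");
    return 0;
}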

Furthermore, whatever is stored in the first sector of a floppy diskette, USB device, hard disk or any other bootable storage device is not required to immediately load any bootstrap code for an OS, if ever. The BIOS merely passes control to whatever exists there, as long as the sector meets the very simple qualification of having the boot record signature of 0xAA55 in its last two bytes. This is why it is easy to replace the usual bootstrap code found in an MBR with more complex loaders, even large multi-functional boot managers (programs stored elsewhere on the device which can run without an operating system), allowing users a number of choices in what occurs next. But with such freedom comes the possibility for abuse, as addressed in the section on boot viruses.


Boot sectors and computer viruses
Boot sectors are one mechanism by which computer viruses gain control of a system. Boot sector infector viruses replace the bootstrap code in the boot sectors (of floppy disks, hard disks, or both) with viral code.

Software development process

A software development process is a structure imposed on the development of a software product. Synonyms include software life cycle and software process. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process.


Overview
A growing number of software development organizations implement process methodologies. Many of them are in the defense industry, which in the U.S. requires a rating based on 'process models' to obtain contracts.

The international standard for describing the method of selecting, implementing and monitoring the life cycle for software is ISO 12207.

A decades-long goal has been to find repeatable, predictable processes that improve productivity and quality. Some try to systematize or formalize the seemingly unruly task of writing software. Others apply project management techniques to writing software. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management appears to be lacking.


Software development activities
Requirements analysis
The most important task in creating a software product is extracting the requirements or requirements analysis. Customers typically have an abstract idea of what they want as an end result, but not what software should do. Incomplete, ambiguous, or even contradictory requirements are recognized by skilled and experienced software engineers at this point. Frequently demonstrating live code may help reduce the risk that the requirements are incorrect. One specific method here is Software Elements Analysis.

Once the general requirements are gleaned from the client, the scope of the development should be determined and clearly stated. This is often captured in a scope document. Certain functionality may be out of scope of the project as a function of cost or as a result of unclear requirements at the start of development. If the development is done externally, this document can be considered a legal document so that if there are ever disputes, any ambiguity of what was promised to the client can be clarified.

Domain Analysis is often the first step in attempting to design a new piece of software, whether it be an addition to existing software, a new application, a new subsystem or a whole new system. Assuming that the developers (including the analysts) are not sufficiently knowledgeable in the subject area of the new software, the first task is to investigate the so-called "domain" of the software. The more knowledgeable they are about the domain already, the less work is required. Another objective of this work is to make the analysts, who will later try to elicit and gather the requirements from the area experts, speak with them in the domain's own terminology, facilitating a better understanding of what is being said by these experts. If the analyst does not use the proper terminology it is likely that they will not be taken seriously, so this phase is an important prelude to extracting and gathering the requirements. If an analyst has not done the appropriate work, confusion may ensue: "I know you believe you understood what you think I said, but I am not sure you realize what you heard is not what I meant."[1]


Specification
Specification is the task of precisely describing the software to be written, possibly in a rigorous way. In practice, most successful specifications are written to understand and fine-tune applications that were already well-developed, although safety-critical software systems are often carefully specified prior to application development. Specifications are most important for external interfaces that must remain stable. A good way to determine whether the specifications are sufficiently precise is to have a third party review the documents making sure that the requirements and Use Cases are logically sound.


Architecture
The architecture of a software system or software architecture refers to an abstract representation of that system. Architecture is concerned with making sure the software system will meet the requirements of the product, as well as ensuring that future requirements can be addressed. The architecture step also addresses interfaces between the software system and other software products, as well as the underlying hardware or the host operating system.


Design, implementation and testing
Implementation is the part of the process where software engineers actually program the code for the project.

Software testing is an integral and important part of the software development process. This part of the process ensures that bugs are recognized as early as possible.

Documenting the internal design of software for the purpose of future maintenance and enhancement is done throughout development. This may also include the authoring of an API, be it external or internal.


Deployment and maintenance
Deployment starts after the code has been appropriately tested, approved for release, and sold or otherwise distributed into a production environment.

Software training and support is important: a large percentage of software projects fail because the developers do not realize that it does not matter how much time and planning a development team puts into creating software if nobody in the organization ends up using it. People are often resistant to change and avoid venturing into an unfamiliar area, so as part of the deployment phase, it is very important to have training classes for new clients of your software.

Maintenance and enhancing software to cope with newly discovered problems or new requirements can take far more time than the initial development of the software. It may be necessary to add code that does not fit the original design to correct an unforeseen problem or it may be that a customer is requesting more functionality and code can be added to accommodate their requests. It is during this phase that customer calls come in and you see whether your testing was extensive enough to uncover the problems before customers do.


Models

Agile software development
Agile software development processes are built on the foundation of iterative development. To that foundation they add a lighter, more people-centric viewpoint than traditional approaches. Agile processes use feedback, rather than planning, as their primary control mechanism. The feedback is driven by regular tests and releases of the evolving software.

Interestingly, surveys have shown the potential for significant efficiency gains over the waterfall method. For example, a survey published in August 2006 by VersionOne and the Agile Alliance, based on polling more than 700 companies, claims significant benefits for an Agile approach.[2] The survey was repeated in August 2007 with about 1,700 respondents.[3]


Iterative processes
Iterative development[4] prescribes the construction of initially small but ever larger portions of a software project to help all those involved to uncover important issues early, before problems or faulty assumptions can lead to disaster. Iterative processes are preferred by commercial developers because they allow the possibility of reaching the design goals of a customer who does not know how to define what they want.


XP: Extreme Programming
Extreme Programming (XP) is the best-known iterative process. In XP, the phases are carried out in extremely small (or "continuous") steps compared to the older, "batch" processes. The (intentionally incomplete) first pass through the steps might take a day or a week, rather than the months or years of each complete step in the Waterfall model. First, one writes automated tests, to provide concrete goals for development. Next is coding (by a pair of programmers), which is complete when all the tests pass, and the programmers can't think of any more tests that are needed. Design and architecture emerge out of refactoring, and come after coding. Design is done by the same people who do the coding. (Only the last feature - merging design and code - is common to all the other agile processes.) The incomplete but functional system is deployed or demonstrated for (some subset of) the users (at least one of which is on the development team). At this point, the practitioners start again on writing tests for the next most important part of the system.


Waterfall processes
The waterfall model shows a process in which developers are to follow these steps in order:

Requirements specification (AKA Verification)
Design
Construction (AKA implementation or coding)
Integration
Testing and debugging (AKA validation)
Installation (AKA deployment)
Maintenance
After each step is finished, the process proceeds to the next step, just as builders don't revise the foundation of a house after the framing has been erected.

There is a misconception that the process has no provision for correcting errors in early steps (for example, in the requirements). In fact this is where the domain of requirements management comes in which includes change control.

This approach is used in high risk projects, particularly large defense contracts. The problems in waterfall do not arise from "immature engineering practices, particularly in requirements analysis and requirements management." Studies of the failure rate of the DOD-STD-2167 specification, which enforced waterfall, have shown that the more closely a project follows its process, specifically in up-front requirements gathering, the more likely the project is to release features that are not used in their current form[citation needed].

Often the supposed stages are part of a review between customer and supplier; the supplier can, in fact, develop at risk and evolve the design, but must sell off the design at a key milestone called the Critical Design Review (CDR). This shifts engineering burdens from engineers to customers, who may have other skills.


Other models
Capability Maturity Model
The Capability Maturity Model (CMM) is one of the leading models. Independent assessments grade organizations on how well they follow their defined processes, not on the quality of those processes or the software produced. CMM has been superseded by CMMI. ISO 9000 describes standards for formally organizing processes with documentation.
ISO 15504
ISO 15504, also known as Software Process Improvement Capability Determination (SPICE), is a "framework for the assessment of software processes". This standard is aimed at setting out a clear model for process comparison. SPICE is used much like CMMI. It models processes to manage, control, guide and monitor software development. This model is then used to measure what a development organization or project team actually does during software development. This information is analyzed to identify weaknesses and drive improvement. It also identifies strengths that can be continued or integrated into common practice for that organization or team.
Six sigma
Six Sigma is a methodology to manage process variations that uses data and statistical analysis to measure and improve a company's operational performance. It works by identifying and eliminating defects in manufacturing and service-related processes. The maximum permissible defect rate is 3.4 per one million opportunities. However, Six Sigma is manufacturing-oriented and needs further research on its relevance to software development.
Test Driven Development
Test Driven Development (TDD) is a useful output of the Agile camp, but some suggest that it raises a conundrum. TDD requires that a unit test be written for a class before the class is written. It might be thought, then, that the class firstly has to be "discovered" and secondly defined in sufficient detail to allow the write-test-once-and-code-until-class-passes model that TDD actually uses. This would actually run counter to Agile approaches, particularly (so-called) Agile Modeling, where developers are still encouraged to code early, with light design. However, to get the claimed benefits of TDD a full design down to class and responsibilities (captured using, for example, Design By Contract) is not necessary. This would count towards iterative development, with a design locked down, but not iterative design - as heavy refactoring and re-engineering might negate the usefulness of TDD.
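
A minimal sketch of the test-first step in C, under the assumption of an invented stack module: the assert-based test is written first and drives the interface, and the implementation below it is the "just enough to pass" code written afterwards. All names here are hypothetical.

/* TDD sketch: the test is written before the code it exercises. */
#include <assert.h>

/* Declarations the test forces us to invent: */
void stack_push(int value);
int  stack_pop(void);

static void test_push_then_pop_returns_value(void)
{
    stack_push(42);
    assert(stack_pop() == 42);
}

/* Minimal implementation written afterwards, just enough to pass. */
static int storage[16];
static int top = 0;
void stack_push(int value) { storage[top++] = value; }
int  stack_pop(void)       { return storage[--top]; }

int main(void)
{
    test_push_then_pop_returns_value();
    return 0;
}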

Formal methods
Formal methods are mathematical approaches to solving software (and hardware) problems at the requirements, specification and design levels. Examples of formal methods include the B-Method, Petri nets, RAISE and VDM. Various formal specification notations are available, such as the Z notation. More generally, automata theory can be used to build up and validate application behavior by designing a system of finite state machines.

Finite state machine (FSM) based methodologies allow executable software specification and bypassing of conventional coding (see virtual finite state machine or event driven finite state machine).
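
The sketch below illustrates only the general event-driven FSM idea, not any particular virtual-FSM methodology: behaviour is expressed as a state-transition function driven by events. The toy states and events (a door controller) are invented for illustration.

/* Generic event-driven finite state machine sketch. The states and
 * events here are invented purely for illustration. */
#include <stdio.h>

typedef enum { CLOSED, OPEN, LOCKED } State;
typedef enum { EV_OPEN, EV_CLOSE, EV_LOCK, EV_UNLOCK } Event;

static State step(State s, Event e)
{
    switch (s) {
    case CLOSED:
        if (e == EV_OPEN) return OPEN;
        if (e == EV_LOCK) return LOCKED;
        break;
    case OPEN:
        if (e == EV_CLOSE) return CLOSED;
        break;
    case LOCKED:
        if (e == EV_UNLOCK) return CLOSED;
        break;
    }
    return s;   /* events not listed for a state are ignored */
}

int main(void)
{
    Event script[] = { EV_LOCK, EV_OPEN, EV_UNLOCK, EV_OPEN, EV_CLOSE };
    State s = CLOSED;
    for (int i = 0; i < 5; i++) {
        s = step(s, script[i]);
        printf("after event %d: state %d\n", i, s);
    }
    return 0;
}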

Formal methods are most likely to be applied in avionics software, particularly where the software is safety critical. Software safety assurance standards, such as DO-178B, demand formal methods at the highest level of categorization (Level A).

Formalization of software development is creeping in elsewhere, with the application of OCL (and specializations such as JML) and especially with MDA, which allows execution of designs, if not specifications.

Another emerging trend in software development is to write a specification in some form of logic (usually a variation of FOL), and then to directly execute the logic as though it were a program. The OWL language, based on Description Logic, is an example. There is also work on mapping some version of English (or another natural language) automatically to and from logic, and executing the logic directly. Examples are Attempto Controlled English, and Internet Business Logic, which does not seek to control the vocabulary or syntax. A feature of systems that support bidirectional English-logic mapping and direct execution of the logic is that they can be made to explain their results, in English, at the business or scientific level.

The Government Accountability Office, in a 2003 report on one of the Federal Aviation Administration’s air traffic control modernization programs[5], recommends following the agency’s guidance for managing major acquisition systems by

establishing, maintaining, and controlling an accurate, valid, and current performance measurement baseline, which would include negotiating all authorized, unpriced work within 3 months; conducting an integrated baseline review of any major contract modifications within 6 months; and preparing a rigorous life-cycle cost estimate, including a risk assessment, in accordance with the Acquisition System Toolset’s guidance and identifying the level of uncertainty inherent in the estimate.

Computer program

Computer programs (also software programs, or just programs) are instructions for a computer.[1] A computer requires programs to function. Moreover, a computer program does not run unless its instructions are executed by a central processor;[2] however, a program may communicate an algorithm to people without running. Computer programs are usually executable programs or the source code from which executable programs are derived (e.g., compiled).

Computer source code is often written by professional computer programmers. Source code is written in a programming language that usually follows one of two main paradigms: imperative or declarative programming. Source code may be converted into an executable file (sometimes called an executable program or a binary) by a compiler. Alternatively, computer programs may be executed by a central processing unit with the aid of an interpreter, or may be embedded directly into hardware.

Computer programs may be categorized along functional lines: system software and application software. And many computer programs may run simultaneously on a single computer, a process known as multitasking.


Programming
Computer programming is the iterative process of writing or editing source code. Editing source code involves testing, analyzing, and refining, and sometimes coordinating with other programmers on a jointly developed program. A person who practices this skill is referred to as a computer programmer or software developer. The sometimes lengthy process of computer programming is usually referred to as software development. The term software engineering is becoming popular as the process is seen as an engineering discipline.


Paradigms
Computer programs can be categorized by the programming language paradigm used to produce them. Two of the main paradigms are imperative and declarative.

Programs written using an imperative language specify an algorithm using declarations, expressions, and statements.[3] A declaration associates a variable name with a datatype. For example: var x: integer; . An expression yields a value. For example: 2 + 2 yields 4. Finally, a statement might assign an expression to a variable or use the value of a variable to alter the program's control flow. For example: x := 2 + 2; if x = 4 then do_something(); One criticism of imperative languages is the side-effect of an assignment statement on a class of variables called non-local variables.[4]
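
The Pascal-style fragments above can be rendered in C to show the same three elements side by side: a declaration, an expression, and statements that assign a value and alter control flow; do_something() is a placeholder, as in the text.

/* The declaration/expression/statement elements above, rendered in C. */
#include <stdio.h>

static void do_something(void) { printf("x is 4\n"); }

int main(void)
{
    int x;              /* declaration: binds the name x to type int  */
    x = 2 + 2;          /* statement: assigns the expression 2 + 2    */
    if (x == 4)         /* statement: uses x to alter control flow    */
        do_something();
    return 0;
}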

Programs written using a declarative language specify the properties that have to be met by the output and do not specify any implementation details. Two broad categories of declarative languages are functional languages and logical languages. The principle behind functional languages (like Haskell) is to not allow side-effects, which makes it easier to reason about programs like mathematical functions.[4] The principle behind logical languages (like Prolog) is to define the problem to be solved — the goal — and leave the detailed solution to the Prolog system itself.[5] The goal is defined by providing a list of subgoals. Then each subgoal is defined by further providing a list of its subgoals, etc. If a path of subgoals fails to find a solution, then that subgoal is backtracked and another path is systematically attempted.

The form in which a program is created may be textual or visual. In a visual language program, elements are graphically manipulated rather than textually specified.


Compilation or interpretation
A computer program in the form of a human-readable, computer programming language is called source code. Source code may be converted into an executable image by a compiler or executed immediately with the aid of an interpreter.

Compiled computer programs are commonly referred to as executables, binary images, or simply as binaries — a reference to the binary file format used to store the executable code. Compilers are used to translate source code from a programming language into either object code or machine code. Object code needs further processing to become machine code, and machine code is the Central Processing Unit's native code, ready for execution.

Interpreted computer programs are either decoded and then immediately executed or are decoded into some efficient intermediate representation for future execution. BASIC, Perl, and Python are examples of immediately executed computer programs. Alternatively, Java computer programs are compiled ahead of time and stored as a machine independent code called bytecode. Bytecode is then executed upon request by an interpreter called a virtual machine.

The main disadvantage of interpreters is computer programs run slower than if compiled. Interpreting code is slower than running the compiled version because the interpreter must decode each statement each time it is loaded and then perform the desired action. On the other hand, software development may be quicker using an interpreter because testing is immediate when the compilation step is omitted. Another disadvantage of interpreters is the interpreter must be present on the computer at the time the computer program is executed. By contrast, compiled computer programs need not have the compiler present at the time of execution.

No properties of a programming language require it to be exclusively compiled or exclusively interpreted. The categorization usually reflects the most popular method of language execution. For example, BASIC is thought of as an interpreted language and C a compiled language, despite the existence of BASIC compilers and C interpreters. Some systems use Just-in-time compilation (JIT) whereby sections of the source are compiled 'on the fly' and stored for subsequent executions.


Self-modifying programs
A computer program in execution is normally treated as being different from the data the program operates on. However, in some cases this distinction is blurred when a computer program modifies itself. The modified computer program is subsequently executed as part of the same program. Self-modifying code is possible for programs written in Machine code, Assembler, Lisp, COBOL, PL/1 and Prolog and probably many others. Sometimes self modification is used as a form of dynamic optimization where the code becomes more efficient through caching or similar techniques. The technique is also often used to nullify all overhead of already embedded debugging code after a 'one time' test decides that debugging should be 'switched off' for the run. Early mainframe operating systems allowed program overlays as a normal practice for application programs, to conserve memory.


Execution and storage
Typically, computer programs are stored in non-volatile memory until requested either directly or indirectly to be executed by the computer user. Upon such a request, the program is loaded into random access memory, by a computer program called an operating system, where it can be accessed directly by the central processor. The central processor then executes ("runs") the program, instruction by instruction, until termination. A program in execution is called a process.[6] Termination is either by normal self-termination or by error — software or hardware error.


Embedded programs

The microcontroller on the right of this USB flash drive is controlled with embedded firmware.
Some computer programs are embedded into hardware. A stored-program computer requires an initial computer program stored in its read-only memory to boot. The boot process is to identify and initialize all aspects of the system, from CPU registers to device controllers to memory contents.[7] Following the initialization process, this initial computer program loads the operating system and sets the program counter to begin normal operations. Independent of the host computer, a hardware device might have embedded firmware to control its operation. Firmware is used when the computer program is rarely or never expected to change, or when the program must not be lost when the power is off.[8]


Manual programming

Switches for manual input on a Data General Nova 3
Computer programs historically were manually input to the central processor via switches. An instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed. This process was then repeated. Computer programs also historically were manually input via paper tape or punched cards. After the medium was loaded, the starting address was set via switches and the execute button pressed.[9]


Automatic program generation
Generative programming is a style of computer programming that creates source code through generic classes, prototypes, templates, aspects, and code generators to improve programmer productivity. Source code is generated with programming tools such as a template processor or an Integrated Development Environment. The simplest form of source code generator is a macro processor, such as the C preprocessor, which replaces patterns in source code according to relatively simple rules.
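
As a small, hedged example of the macro-processor idea, the C fragment below shows the C preprocessor replacing patterns in source code before compilation; the macro names are invented for illustration.

/* The C preprocessor as a simple source code generator: the constant
 * and macro below are expanded textually before compilation. */
#include <stdio.h>

#define BUFFER_SIZE 64                    /* constant substitution     */
#define SQUARE(x)   ((x) * (x))           /* pattern-based replacement */

int main(void)
{
    char buffer[BUFFER_SIZE];             /* expands to: char buffer[64];        */
    int n = SQUARE(3 + 1);                /* expands to: ((3 + 1) * (3 + 1))     */
    printf("%d bytes, n = %d\n", (int)sizeof buffer, n);
    return 0;
}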

Software engines output source code or markup code that simultaneously become the input to another computer process. The analogy is that of one process driving another process, with the computer code being burned as fuel. Application servers are software engines that deliver applications to client computers. For example, a Wiki is an application server that allows users to build dynamic content assembled from articles. Wikis generate HTML, CSS, Java, and Javascript which are then interpreted by a web browser.


Simultaneous execution
Many operating systems support multitasking which enables many computer programs to appear to be running simultaneously on a single computer. Operating systems may run multiple programs through process scheduling — a software mechanism to switch the CPU among processes frequently so that users can interact with each program while it is running.[10] Within hardware, modern day multiprocessor computers or computers with multicore processors may run multiple programs.[11]


Functional categories
Computer programs may be categorized along functional lines. These functional categories are system software and application software. System software includes the operating system which couples the computer's hardware with the application software.[12] The purpose of the operating system is to provide an environment in which application software executes in a convenient and efficient manner. In addition to the operating system, system software includes utility programs that help manage and tune the computer. If a computer program is not system software then it is application software. Application software includes middleware, which couples the system software with the user interface. Application software also includes utility programs that help users solve application problems, like the need for sorting.

System software

System software is any computer software which manages and controls computer hardware so that application software can perform a task. Operating systems, such as Microsoft Windows, Mac OS X or Linux, are prominent examples of system software.

System software is software that basically makes the computer work. Examples besides operating systems are anti-virus software, communication software and printer drivers. Without system software the computer does not work. The software that allows you to do things like create text documents, control stocks, or produce music DVDs is called application software.[1]

In general, application software consists of programs that enable the end-user to perform specific, productive tasks, such as word processing or image manipulation. System software performs tasks like transferring data from memory to disk, or rendering text onto a display device.


Types of system software
System software can be classified into operating systems and language processors.

The operating system creates an interface between the user and the system hardware. Language processors are programs that convert computer languages (assembly and high-level languages) into machine-level language.

Examples of language processors are assemblers, compilers and interpreters.

System software helps run the computer hardware and computer system. It includes operating systems, device drivers, diagnostic tools, servers, windowing systems, utilities and more. The purpose of systems software is to insulate the applications programmer as much as possible from the details of the particular computer complex being used, especially memory and other hardware features, and such accessory devices as communications, printers, readers, displays, keyboards, etc.

BIOS

In computing, BIOS is an acronym that stands either for the Basic Input/Output System or for Built In Operating System.[1] The latter term dates from the late 1970s and early 1980s, when the PCs of the day normally contained a comprehensive operating system in their ROMs. That acronym is still much used in connection with gadgetry containing dedicated computers, such as modern cameras, white goods and so on. The underlying technology is very similar, being little more than a matter of complexity.

BIOS refers, in part, to the firmware code (a type of boot loader) run by a PC when first powered on. The primary function of the BIOS is to identify and initialize system component hardware (such as the video display card, hard disk, and floppy disk) and some other hardware devices. This is to prepare the machine into a known low capability state, so other software programs stored on various media can be loaded, executed, and given control of the PC.[2] This process is known as booting, or booting up, which is short for bootstrapping.

The BIOSes of IBM PC class machines are coded programs embedded on a chip that recognize and control various devices that make up x86 personal computers, and provide a small library of basic Input/Output functions that can be called to operate and control the peripherals such as the keyboard, primitive (800 x 600) display functions and so forth.

Computers designed to run Windows ME or Windows 2000, or later, supersede this basic monitor functionality by taking over direct control of the interrupt table and replacing the monitor routines with faster and more robust low-level modules that, unlike the BIOS function set, are re-entrant. Various BIOS functions in ROM were left in control in earlier Windows versions, and the BIOS only comes into play today in the alternate shell Cmd.exe, or if the machine is booted into a legacy DOS version.


Terminology
The term first appeared in the CP/M operating system, describing the part of CP/M loaded during boot time that interfaced directly with the hardware (CP/M machines usually had a simple boot loader in ROM, and nothing else). Most versions of DOS have a file called "IBMBIO.COM" or "IO.SYS" that is analogous to the CP/M disk BIOS. The term was also known as Binary Input/Output System and Basic Integrated Operating System.

Among other classes of computers, the generic terms boot monitor, boot loader or boot ROM were commonly used. Some Sun and Macintosh PowerPC computers used Open Firmware for this purpose. There are a few alternatives for Legacy BIOS in the x86 world: Extensible Firmware Interface, Open Firmware (used on the OLPC XO-1) and coreboot.

ROM with BIOS
Prior to the early 1990s, BIOSes were stored in ROM or PROM chips, which could not be altered by users. As BIOS complexity and the need for updates grew, and re-programmable parts became more available, BIOS firmware was most commonly stored on EEPROM or flash memory devices. According to Robert Braver, the president of the BIOS manufacturer Micro Firmware, flash BIOS chips became common around 1995 because electrically erasable PROM (EEPROM) chips are cheaper and easier to program than standard erasable PROM (EPROM) chips. EPROM chips may be erased by prolonged exposure to ultraviolet light, which reaches the chip through a window in the package, and chip manufacturers use EPROM programmers to reprogram them. EEPROM chips come with the additional advantage of allowing BIOS reprogramming via higher-than-normal voltages.[3] BIOS versions are upgraded to take advantage of newer versions of hardware and to correct bugs in previous revisions of BIOSes.

The first flash chips attached to the ISA bus. Starting in 1997, the BIOS flash moved to the LPC bus, a functional replacement for ISA, following a new standard implementation known as "firmware hub" (FWH). Most BIOS revisions created in 1995 and nearly all BIOS revisions in 1997 supported the year 2000.[5] In 2006, the first systems supporting a Serial Peripheral Interface (SPI) appeared, and the BIOS flash moved again.

The size of the BIOS, and the capacities of the ROM, EEPROM and other media it may be stored on, have increased over time as new features have been added to the code; BIOS versions now exist with sizes up to 8 megabytes. Some modern motherboards include even larger NAND flash ROM ICs on board, capable of storing a whole compact operating system distribution, such as a small Linux distribution. For example, some recent ASUS motherboards include SplashTop Linux embedded in their NAND flash ROM ICs.


BIOS chip vulnerabilities
EEPROM chips are advantageous because they can be easily updated by the user; hardware manufacturers frequently issue BIOS updates to upgrade their products, improve compatibility and remove bugs. However, this capability brings the risk that an improperly executed or aborted BIOS update can render the computer or device unusable. To avoid such situations, more recent BIOSes use a "boot block", a portion of the BIOS which runs first and must be updated separately. This code verifies that the rest of the BIOS is intact (using hash checksums or other methods) before transferring control to it. If the boot block detects any corruption in the main BIOS, it will typically warn the user that a recovery process must be initiated by booting from removable media (floppy, CD or USB memory) so the user can try flashing the BIOS again. Some motherboards have a backup BIOS (sometimes referred to as DualBIOS boards) to recover from BIOS corruption. In 2007, Gigabyte began offering motherboards with a QuadBIOS recovery feature.[6]
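
The integrity check performed by a boot block is vendor-specific; the following C fragment is only a sketch of the general idea, using a simple 8-bit additive checksum, with the surrounding firmware calls shown as hypothetical names in a comment.

/* Illustration only: before transferring control, a boot block might
 * verify the main BIOS image against a stored checksum. Real firmware
 * uses vendor-specific checksums or hashes; an 8-bit byte sum is used
 * here just to sketch the idea. */
#include <stdint.h>
#include <stddef.h>

static int main_bios_is_intact(const uint8_t *image, size_t len,
                               uint8_t expected_sum)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += image[i];                 /* simple additive checksum */
    return sum == expected_sum;
}

/* Hypothetical use inside the boot block:
 *   if (main_bios_is_intact(main_bios, main_bios_len, stored_sum))
 *       jump_to_main_bios();
 *   else
 *       enter_recovery_mode();
 */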


Virus attacks
At least one virus, named CIH (also known as the "Chernobyl virus"), was able to erase flash ROM BIOS content, rendering computer systems unusable. Affected systems often cannot be repaired on their own since they are no longer able to boot; in such a scenario, the flash ROM IC must be removed from the motherboard and reprogrammed elsewhere. Mass damage from the CIH virus was possible because most motherboards during the period of CIH's "popularity" used the same chipset, the Intel TX, and common operating systems such as Windows 95 allowed all programs direct hardware access.

Modern systems are not vulnerable to CIH because they use a variety of chipsets (incompatible with the Intel TX chipset) and other flash ROM IC types, and because they include extra protection against accidental BIOS rewrites, such as boot blocks that are further protected against accidental overwrite, or a dual BIOS so that if one copy is corrupted the second is used. In addition, modern operating systems such as Windows XP, Windows Vista and Linux do not allow ordinary non-privileged programs direct hardware access. As of 2008, CIH has therefore become almost harmless; at most it bothers users by infecting executable files and triggering numerous alerts from antivirus software, without being able to cause any real harm.


Firmware on adapter cards
A computer system can contain several BIOS firmware chips. The motherboard BIOS typically contains code to access fundamental hardware components such as the keyboard, floppy drives, ATA (IDE) hard disk controllers, USB human interface devices, and storage devices. In addition, plug-in adapter cards such as SCSI, RAID, Network interface cards, and video boards often include their own BIOS, complementing or replacing the system BIOS code for the given component.

For some devices that could be provided by add-in adapters but are actually integrated directly on the motherboard, the add-in ROM may also be stored as separate code on the main BIOS flash chip. It may then be possible to upgrade this "add-in" BIOS (sometimes called an option ROM) separately from the main BIOS code.

Add-in cards usually only require such an add-in BIOS if they:

Need to be used prior to the time that the operating system loads (e.g. they may be used as part of the process which loads (bootstraps) the operating system), and

Are not sufficiently simple or generic in operation to be handled by the main BIOS directly.

PC operating systems such as DOS, including all DOS-based versions of MS Windows, as well as bootloaders, may continue to make use of the BIOS to handle input and output. However, other modern operating systems will interact with hardware devices directly by using their own device drivers to directly access the hardware. Occasionally these add-in BIOSs are still called by these operating systems, in order to carry out specific tasks such as preliminary device initialization.

To find these memory-mapped expansion ROMs during the boot process, PC BIOS implementations scan real memory from 0xC0000 to 0xF0000 on 2-kibibyte boundaries, looking for the ROM signature bytes of 55h followed by AAh (0xAA55). For a valid expansion ROM, its signature is immediately followed by a single byte indicating the number of 512-byte blocks it occupies in real memory. The BIOS then jumps to the offset located immediately after this size byte, at which point the expansion ROM code takes over, using BIOS services to register interrupt vectors for use by post-boot applications, provide a user configuration interface, or display diagnostic information.
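
The same scan can be sketched in C over a buffer holding a dump of the 0xC0000-0xEFFFF range (for instance, one captured with the dd command shown below); the dump file name is hypothetical, and the real BIOS performs this scan on physical memory in real mode.

/* Sketch of the expansion ROM scan described above, applied to a
 * buffer holding a dump of real-mode memory 0xC0000-0xEFFFF. */
#include <stdint.h>
#include <stdio.h>

#define SCAN_BASE 0xC0000UL
#define SCAN_SIZE 0x30000UL   /* 0xC0000 up to (but not including) 0xF0000 */

int main(void)
{
    static uint8_t mem[SCAN_SIZE];
    FILE *f = fopen("rom_area.bin", "rb");   /* hypothetical 192 KiB dump */
    if (!f || fread(mem, 1, SCAN_SIZE, f) != SCAN_SIZE) {
        fprintf(stderr, "could not read a 192 KiB dump of 0xC0000-0xEFFFF\n");
        return 1;
    }
    fclose(f);

    for (unsigned long off = 0; off < SCAN_SIZE; off += 2048) {  /* 2 KiB steps */
        if (mem[off] == 0x55 && mem[off + 1] == 0xAA) {
            unsigned blocks = mem[off + 2];   /* length in 512-byte blocks */
            printf("option ROM at 0x%05lX, %u bytes, entry point at offset 3\n",
                   SCAN_BASE + off, blocks * 512u);
        }
    }
    return 0;
}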

There are many methods and utilities for dumping the contents of various motherboard BIOS and expansion ROMs. Under a Microsoft OS, DEBUG can be used to examine 64 KiB segments of memory and save the contents to a file. On UNIX systems the dd command can be used by a user with root privileges: "dd if=/dev/mem bs=1k skip=768 count=256 2>/dev/null | strings -n 8".


The BIOS boot specification
If the expansion ROM wishes to change the way the system boots (such as from a network device or a SCSI adapter for which the BIOS has no driver code), it can use the BIOS Boot Specification (BBS) API to register its ability to do so. Once the expansion ROMs have registered using the BBS APIs, the user can select among the available boot options from within the BIOS's user interface. This is why most BBS-compliant PC BIOS implementations will not allow the user to enter the BIOS's user interface until the expansion ROMs have finished executing and registering themselves with the BBS API.[citation needed]


Changing role of the BIOS
Some operating systems, for example MS-DOS, rely on the BIOS to carry out most input/output tasks within the PC.[7] A variety of technical reasons makes it inefficient for some recent operating systems written for 32-bit CPUs, such as Linux and Microsoft Windows, to invoke the BIOS directly. For larger, more powerful servers and workstations using PowerPC or SPARC CPUs, several manufacturers developed the platform-independent Open Firmware (IEEE 1275), based on the Forth programming language. It is included with Sun's SPARC computers, IBM's RS/6000 line, and other PowerPC CHRP motherboards. Later x86-based personal computer operating systems, like Windows NT, use their own native drivers, which also makes it much easier to extend support to new hardware, while the BIOS still relies on a legacy 16-bit runtime interface. As such, the BIOS was relegated to bootstrapping, at which point the operating system's own drivers can take control of the hardware.

There was a similar transition for the Apple Macintosh, where the system software originally relied heavily on the ToolBox—a set of drivers and other useful routines stored in ROM based on Motorola's 680x0 CPUs. These Apple ROMs were replaced by Open Firmware in the PowerPC Macintosh, then EFI in Intel Macintosh computers.

Later BIOS took on more complex functions, by way of interfaces such as ACPI; these functions include power management, hot swapping and thermal management. However BIOS limitations (16-bit processor mode, only 1 MiB addressable space, PC AT hardware dependencies, etc.) were seen as clearly unacceptable for the newer computer platforms. Extensible Firmware Interface (EFI) is a specification which replaces the runtime interface of the legacy BIOS. Initially written for the Itanium architecture, EFI is now available for x86 and x86-64 platforms; the specification development is driven by The Unified EFI Forum, an industry Special Interest Group.

Linux has supported EFI via the elilo boot loader. The open-source community has increased its effort to develop a replacement for proprietary BIOSes and their future incarnations with an open-sourced counterpart through the coreboot and OpenBIOS/Open Firmware projects. AMD provided product specifications for some chipsets, and Google is sponsoring the project. Motherboard manufacturer Tyan offers coreboot next to the standard BIOS with its Opteron line of motherboards. MSI and Gigabyte have followed suit with the MSI K9ND MS-9282 and MSI K9SD MS-9185 boards and the M57SLI-S4 model, respectively.


The BIOS business
The vast majority of PC motherboard suppliers license a BIOS "core" and toolkit from a commercial third-party, known as an "independent BIOS vendor" or IBV. The motherboard manufacturer then customizes this BIOS to suit its own hardware. For this reason, updated BIOSes are normally obtained directly from the motherboard manufacturer.

Major BIOS vendors include American Megatrends (AMI), Insyde Software, Phoenix Technologies (which bought Award Software International in 1998 and General Software in 2008) and Byosoft (which is a Chinese firmware company located at Nanjing, China).

Motherboard

A motherboard is the central printed circuit board (PCB) in some complex electronic systems, such as a modern computer or laptop. It is also known as a mainboard, system board, and, on Apple computers, a logic board.[1] It is also sometimes casually shortened to mobo.[2]

Most motherboards produced today are designed for IBM-compatible computers, which currently hold over 95% of the global personal computer market.[3] A motherboard, like a backplane, provides the electrical connections by which the other components of the system communicate, but unlike a backplane, it also contains the central processing unit and other subsystems and devices, such as the chipset.

Motherboards are also used in robots and in many other electronics devices.


Overview
An Acer E360 motherboard made by Foxconn, from 2005, with a large number of integrated peripherals. This board's nForce3 chipset lacks a traditional northbridge.
A typical desktop computer is built with the microprocessor, main memory, and other essential components on the motherboard. Other components such as external storage, controllers for video display and sound, and peripheral devices are typically attached to the motherboard via edge connectors and cables, although in modern computers it is increasingly common to integrate these "peripherals" into the motherboard.

All of the basic circuitry and components required for a computer to function are on the motherboard or are connected to it with a cable. The most important component on a motherboard is the chipset. In modern x86 motherboards, it often consists of two components or chips known as the northbridge and southbridge, though they may also be integrated into a single component. These chips determine, to an extent, the features and capabilities of the motherboard.

Modern motherboards include, at a minimum:

sockets (or slots) in which one or more microprocessors are installed[4]
slots into which the system's main memory is installed (typically in the form of DIMM modules containing DRAM chips)
a chipset which forms an interface between the CPU's front-side bus, main memory, and peripheral buses
non-volatile memory chips (usually Flash ROM in modern motherboards) containing the system's firmware or BIOS
a clock generator which produces the system clock signal to synchronize the various components
slots for expansion cards (these interface to the system via the buses supported by the chipset)
power connectors, which receive electrical power from the computer power supply and distribute it to the CPU, chipset, main memory, and expansion cards.[5]

The Octek Jaguar V motherboard from 1993.[6] This board has 6 ISA slots but few onboard peripherals, as evidenced by the lack of external connectors.
Additionally, nearly all motherboards include logic and connectors to support commonly-used input devices, such as PS/2 connectors for a mouse and keyboard. Early personal computers such as the Apple II or IBM PC included only this minimal peripheral support on the motherboard. Occasionally video interface hardware was also integrated into the motherboard; for example on the Apple II, and rarely on IBM-compatible computers such as the IBM PC Jr. Additional peripherals such as disk controllers and serial ports were provided as expansion cards.

Given the high thermal design power of high-speed computer CPUs and components, modern motherboards nearly always include heatsinks and mounting points for fans to dissipate excess heat.

Integrated peripherals
Diagram of a modern motherboard, which supports many on-board peripheral functions as well as several expansion slots.
With the steadily declining costs and size of integrated circuits, it is now possible to include support for many peripherals on the motherboard. By combining many functions on one PCB, the physical size and total cost of the system may be reduced; highly-integrated motherboards are thus especially popular in small form factor and budget computers.

For example, the ECS RS485M-M,[7] a typical modern budget motherboard for computers based on AMD processors, has on-board support for a very large range of peripherals:

disk controllers for a floppy disk drive, up to 2 PATA drives, and up to 6 SATA drives (including RAID 0/1 support)
integrated ATI Radeon graphics controller supporting 2D and 3D graphics, with VGA and TV output
integrated sound card supporting 8-channel (7.1) audio and S/PDIF output
fast Ethernet network controller for 10/100 Mbit networking
USB 2.0 controller supporting up to 12 USB ports
IrDA controller for infrared data communication (e.g. with an IrDA enabled Cellular Phone or Printer)
temperature, voltage, and fan-speed sensors that allow software to monitor the health of computer components
Expansion cards to support all of these functions would have cost hundreds of dollars even a decade ago; as of April 2007, however, such highly-integrated motherboards are available for as little as $30 in the USA.


Peripheral card slots
A typical motherboard of 2007 will have a varying number of connections depending on its standard. A standard ATX motherboard will typically have 1x PCI-E 16x connection for a graphics card, 2x PCI slots for various expansion cards, and 1x PCI-E 1x slot, a standard which will eventually supersede PCI.

A standard Super ATX motherboard will have 1x PCI-E 16x connection for a graphics card. It will also have a varying number of PCI and PCI-E 1x slots. It can sometimes also have a PCI-E 4x slot. This varies between brands and models.

Some motherboards have 2x PCI-E 16x slots to allow more than 2 monitors without special hardware or to allow use of a special graphics technology called SLI (for Nvidia) and Crossfire (for ATI). These allow 2 graphics cards to be linked together to allow better performance in intensive graphical computing tasks such as gaming and video editing.

As of 2007, virtually all motherboards come with at least 4x USB ports on the rear and at least 2 connections on the board internally for wiring additional front ports built into the computer's case. An Ethernet port, the standard networking interface for connecting the computer to a network or a modem, is also now included. A sound chip is always included on the motherboard to allow sound to be output without the need for any extra components; this allows computers to be far more multimedia-oriented than before. Cheaper machines now often have their graphics chip built into the motherboard rather than on a separate card.


Temperature and reliability
Motherboards are generally air cooled, with heat sinks often mounted on larger chips such as the northbridge. If the motherboard is not cooled properly, it can overheat and cause the system to crash. Passive cooling, or a single fan mounted on the power supply, was sufficient for many desktop computer CPUs until the late 1990s; since then, most have required CPU fans mounted on their heatsinks, due to rising clock speeds and power consumption. Most motherboards have connectors for additional case fans as well. Newer motherboards have integrated temperature sensors to detect motherboard and CPU temperatures, and controllable fan connectors which the BIOS or operating system can use to regulate fan speed. Some higher-powered computers (which typically have high-performance processors and large amounts of RAM, as well as high-performance video cards) use a water-cooling system instead of many fans.

Some small form factor computers and home theater PCs designed for quiet and energy-efficient operation boast fan-less designs. This typically requires the use of a low-power CPU, as well as careful layout of the motherboard and other components to allow for heat sink placement.

A 2003 study[8] found that some spurious computer crashes and general reliability issues, ranging from screen image distortions to I/O read/write errors, can be attributed not to software or peripheral hardware but to aging capacitors on PC motherboards. Ultimately this was shown to be the result of a faulty electrolyte formulation.

Motherboards use electrolytic capacitors to filter the DC power distributed around the board. These capacitors age at a temperature-dependent rate, as their water-based electrolytes slowly evaporate. This can lead to loss of capacitance and subsequent motherboard malfunctions due to voltage instabilities. While most capacitors are rated for 2000 hours of operation at 105 °C,[10] their expected design life roughly doubles for every 10 °C below this, so at 45 °C a lifetime of around 15 years can be expected. This appears reasonable for a computer motherboard; however, many manufacturers have delivered substandard capacitors,[citation needed] which significantly reduce life expectancy. Inadequate case cooling and elevated temperatures easily exacerbate this problem. It is possible, but tedious and time-consuming, to find and replace failed capacitors on PC motherboards; it is usually less expensive to buy a new motherboard than to pay for such a repair.[citation needed]
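As a quick check on the figures quoted above, the rule of thumb can be applied directly: rated life doubles for every 10 °C below the rated temperature. A minimal Python sketch of that arithmetic (only the 2000-hour/105 °C rating and the 45 °C operating point from the paragraph are used; nothing else is assumed):

# Electrolytic capacitor life from the "doubles every 10 C below rating" rule
def estimated_life_hours(rated_hours, rated_temp_c, operating_temp_c):
    return rated_hours * 2 ** ((rated_temp_c - operating_temp_c) / 10.0)

hours = estimated_life_hours(2000, 105, 45)   # 2000 h at 105 C, operated at 45 C
print(hours, "hours, about", round(hours / (24 * 365), 1), "years")
# -> 128000.0 hours, about 14.6 years, consistent with the ~15 years above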


Form factor

microATX form factor motherboard
Motherboards are produced in a variety of sizes and shapes ("form factors"), some of which are specific to individual computer manufacturers. However, the motherboards used in IBM-compatible commodity computers have been standardized to fit various case sizes. As of 2007, most desktop computer motherboards use one of these standard form factors, even those found in Macintosh and Sun computers, which have not traditionally been built from commodity components.

Laptop computers generally use highly integrated, miniaturized, and customized motherboards. This is one of the reasons that laptop computers are difficult to upgrade and expensive to repair. Often the failure of one laptop component requires the replacement of the entire motherboard, which is usually more expensive than a desktop motherboard due to the large number of integrated components.


Nvidia SLI and ATI Crossfire
Nvidia SLI and ATI Crossfire technologies allow two or more graphics cards of the same series to be linked together for faster graphics performance. Almost all medium- to high-end Nvidia cards and most high-end ATI cards support the technology.

They both require compatible motherboards. There is an obvious need for 2x PCI-E 16x slots to allow two cards to be inserted into the computer; the same function can be achieved on NVIDIA's 650i motherboards with a pair of x8 slots. Originally, tri-Crossfire was achieved with 2 16x slots and 1 8x slot, the third card running at the slower 8x speed. ATI opened the technology up to Intel in 2006, and as such all new Intel chipsets support Crossfire.

SLI is a little more proprietary in its needs. It requires a motherboard with Nvidia's own NForce chipset series to allow it to run.

It is important to note that SLI and Crossfire will not usually scale to 2x the performance of a single card when using a dual setup. They also do not double the effective amount of VRAM or memory bandwidth.


History
Prior to the advent of the microprocessor, a computer was usually built in a card-cage case or mainframe with components connected by a backplane consisting of a set of slots themselves connected with wires; in very old designs the wires were discrete connections between card connector pins, but printed-circuit boards soon became the standard practice. The central processing unit, memory and peripherals were housed on individual printed circuit boards which plugged into the backplane.

During the late 1980s and 1990s, it became economical to move an increasing number of peripheral functions onto the motherboard (see above). In the late 1980s, motherboards began to include single ICs (called Super I/O chips) capable of supporting a set of low-speed peripherals: keyboard, mouse, floppy disk drive, serial ports, and parallel ports. As of the late 1990s, many personal computer motherboards support a full range of audio, video, storage, and networking functions without the need for any expansion cards at all; higher-end systems for 3D gaming and computer graphics typically retain only the graphics card as a separate component.

The early pioneers of motherboard manufacturing were Micronics, Mylex, AMI, DTK, Hauppauge, Orchid Technology, Elitegroup, DFI, and a number of Taiwan-based manufacturers.

Popular personal computers such as the Apple II and IBM PC had published schematic diagrams and other documentation which permitted rapid reverse-engineering and third-party replacement motherboards. Usually intended for building new computers compatible with the exemplars, many motherboards offered additional performance or other features and were used to upgrade the manufacturer's original equipment.

The term mainboard is archaically applied to devices with a single board and no additional expansions or capability; in modern terms this would include embedded systems and the controller boards in televisions, washing machines, etc. A motherboard specifically refers to a printed circuit board whose performance and capabilities can be extended by the addition of "daughterboards".


Bootstrapping using the BIOS
Motherboards contain some non-volatile memory to initialize the system and load an operating system from some external peripheral device. Microcomputers such as the Apple II and IBM PC used read-only memory chips, mounted in sockets on the motherboard. At power-up, the central processor would load its program counter with the address of the boot ROM and start executing ROM instructions, displaying system information on the screen and running memory checks, before loading an operating system from an external or peripheral device such as a disk drive. If no such device is available, the computer either performs tasks from other memory stores or displays an error message, depending on the model and design of the computer and the version of the BIOS.

Most modern motherboard designs use a BIOS, stored in an EEPROM chip soldered to the motherboard, to bootstrap the motherboard. (Socketed BIOS chips are widely used, also.) By booting the motherboard, the memory, circuitry, and peripherals are tested and configured.

Intermediate switch

A DPDT switch has six connections, but since polarity reversal is a very common usage of DPDT switches, some variations of the DPDT switch are internally wired specifically for polarity reversal. These crossover switches only have four terminals rather than six. Two of the terminals are inputs and two are outputs. When connected to a battery or other DC source, the 4-way switch selects from either normal or reversed polarity. Intermediate switches are also an important part of multiway switching systems with more than two switches (see next section).


Multiway switching
Multiway switching is a method of connecting switches in groups so that any switch can be used to connect or disconnect the load. This is most commonly done with lighting.


Two locations
Switching a load on or off from two locations (for instance, turning a light on or off from either end of a flight of stairs) requires two SPDT switches. There are two basic methods of wiring to achieve this, as well as a third method that is not recommended.

In the first method, mains is fed into the common terminal of one of the switches; the switches are then connected through the L1 and L2 terminals (swapping the L1 and L2 terminals will just make the switches work the other way round), and finally a feed to the light is taken from the common of the second switch. A connects to B or C, D connects to B or C; the light is on if A connects to D, i.e. if A and D both connect to B or both connect to C.

The second method is to join the three terminals of one switch to the corresponding terminals on the other switch and take the incoming supply and the wire out to the light to the L1 and L2 terminals. Through one switch A connects to B or C, through the other also to B or C; the light is on if B connects to C, i.e. if A connects to B with one switch and to C with the other.
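The on/off logic of both wiring methods can be checked by enumerating the four switch positions. A small Python sketch, purely as an illustration of the logic described above (terminal labels follow the text; this is not wiring advice):

# Two SPDT switches; each selects traveler "B" or "C".
from itertools import product

for s1, s2 in product(("B", "C"), repeat=2):
    method1_on = (s1 == s2)   # first method: on when both commons pick the same traveler
    method2_on = (s1 != s2)   # second method: on when the switches pick different terminals
    print(s1, s2, "method 1:", "on" if method1_on else "off",
          " method 2:", "on" if method2_on else "off")

In both methods flipping either switch changes the result, which is exactly the behaviour wanted from two-way switching; the methods differ only in which pair of positions lights the lamp.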

If the mains and the load are connected to the system of switches at one of them, then in both methods three wires are needed between the two switches. In the first method, one of the three wires simply has to pass through the switch, which tends to be less convenient than being connected to it. When multiple wires come to a terminal they can often all be put directly in the terminal. When wires need to be joined without going to a terminal, a crimped joint, a piece of terminal block, a wirenut or a similar device must be used, and the added bulk may require use of a deeper backbox.


Using the first method, there are four possible combinations of switch positions: two with the light on and two with the light off.

An unrecommended method

If there is a hot (live phase) and a neutral wire at both switch locations, and only one wire runs between them to the point where the light is connected, the two-way switching problem can be solved easily: at each switch, connect the hot to the top terminal, the neutral to the bottom terminal, and the wire that goes to the light to the middle (common) terminal. Done at both switches, this gives a fully functional two-way switch.

This works like the first method above: there are four possible combinations, and in only two of them are a hot and a neutral connected across the poles of the light. In the other two, both poles are either neutral or hot, so no current flows because the potential difference is zero.

The advantage of this method is that it uses just one wire to the light, given a hot and a neutral at both switches.

The reason why this is not recommended is that the light socket pins may still be hot even with the light off, which poses a risk when changing a bulb. Another problem with this method is that in both switches there will be hot and neutral wires entering a single switch, which can lead to a short circuit in the event of switch failure, unlike the other methods.

This method is in defiance of the National Electrical Code (USA) and the Canadian Electrical Code. In nearly all applications, neutral conductors should never be switched. Not only is this a shock hazard, since a conductor may mistakenly be believed to be switched off when it is still hot; it is also a fire hazard and can destroy sensitive equipment, due to excessive and unbalanced current flowing on hot conductors that would otherwise flow back to ground on the neutral conductor.


More than two locations
For more than two locations, the two cores connecting the L1 and L2 of the switches must be passed through an intermediate switch (as explained above) wired to swap them over. Any number of intermediate switches can be inserted, allowing for any number of locations.



As mentioned above, this circuit can be extended by using multiple 4-way switches between the 3-way switches, extending switching ability to any number of locations.
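One way to see why this works is to model the two travelers as a single "which wire is live" state that each intermediate (4-way) switch either passes straight through or swaps. A hedged Python sketch of that model (the function and variable names are invented for illustration):

# Two 3-way (SPDT) end switches with any number of 4-way intermediates between them.
def light_is_on(end1, intermediates, end2):
    # end1/end2: 0 or 1, the traveler each end switch selects
    # intermediates: list of booleans, True = crossed (travelers swapped)
    wire = end1
    for crossed in intermediates:
        if crossed:
            wire = 1 - wire
    return wire == end2   # circuit complete only if both ends meet on the same traveler

print(light_is_on(0, [False, True, False], 1))   # example state: on
print(light_is_on(1, [False, True, False], 1))   # flip one end switch: off
print(light_is_on(0, [False, False, False], 1))  # flip the middle intermediate instead: off

Flipping any single switch, end or intermediate, toggles the result, so the scheme scales to any number of locations.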


Power switching
When a switch is designed to switch significant power, the transitional state of the switch as well as its ability to withstand continuous operating currents must be considered. When a switch is on, its resistance is near zero and very little power is dropped in the contacts; when a switch is in the off state, its resistance is extremely high and even less power is dropped in the contacts. However, when the switch is flicked, its resistance must pass through intermediate values at which, briefly, a quarter of the load's rated power (or worse, if the load is not purely resistive) is dropped in the switch.
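The "quarter of the rated power" figure for a resistive load comes from maximum power transfer: the worst moment is when the contact resistance momentarily equals the load resistance. A rough Python sketch of that arithmetic (the voltage and resistance values are made up for illustration):

# Power dissipated in the contacts versus contact resistance, resistive load.
V = 230.0        # supply voltage (illustrative)
R_load = 52.9    # load resistance, so rated load power is V**2 / R_load ~ 1000 W

def switch_power(r_contact):
    i = V / (r_contact + R_load)
    return i * i * r_contact

rated = V * V / R_load
worst = max(switch_power(R_load * k / 100.0) for k in range(1, 1000))
print(round(rated), "W rated load;", round(worst), "W peak in the contacts")
# Peak occurs near r_contact == R_load and is about rated/4.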

For this reason, most power switches (most light switches and almost all larger switches) have spring mechanisms in them to make sure the transition between on and off is as short as possible regardless of the speed at which the user moves the rocker.

Power switches usually come in two types. A momentary on-off switch (such as on a laser pointer) usually takes the form of a button and only closes the circuit when the button is depressed. A regular on-off switch (such as on a flashlight) has a constant on-off feature. Dual-action switches incorporate both of these features.


Inductive loads
When a strongly inductive load such as an electric motor is switched on, an inrush (surge) current flows which may be several times larger than the steady-state current. When such a load is switched off, the current cannot drop instantaneously to zero; a spark will jump across the opening contacts. Switches for inductive loads must be rated to handle these cases. The spark will cause electromagnetic interference if not suppressed; a snubber network of a resistor and capacitor in series will quell the spark. Exact values can be optimised for the particular application, but for many cases a 100 ohm resistor in series with a 100 nanofarad capacitor will do.
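As a small illustration of the suggested values (not a design procedure), the time constant and corner frequency of that 100 ohm / 100 nF snubber are easy to compute:

import math
R, C = 100.0, 100e-9          # the resistor and capacitor values quoted above
tau = R * C                   # RC time constant
f_corner = 1 / (2 * math.pi * R * C)
print(tau, "s time constant,", round(f_corner), "Hz corner frequency")
# -> 1e-05 s (10 microseconds) and roughly 15915 Hz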


Up or down
Domestic light switches are generally moved up to switch on in the USA, but down in most of Europe and Australia.

The reason for the difference remains a bit of a mystery. A few hypotheses are often put forward, (for example in the USA if the switch spring fails it cannot cause the switch to accidentally turn on, in other words it will fail safe), but none have been validated. Since there is no significant technical reason for either preference, the standards likely developed due to chance and some degree of cultural isolation.

In countries prone to earthquakes, such as Japan, most switches rock sideways to prevent the switch from inadvertently being turned on or off by falling objects.


Contact bounce
Contact bounce (also called chatter) is a common problem with mechanical switches and relays. Switch and relay contacts are usually made of springy metals that are forced into contact by an actuator. When the contacts strike together, their momentum and elasticity act together to cause bounce. The result is a rapidly pulsed electrical current instead of a clean transition from zero to full current. The waveform is then further modified by the parasitic inductances and capacitances in the switch and wiring, resulting in a series of damped sinusoidal oscillations. This effect is usually unnoticeable in AC mains circuits, where the bounce happens too quickly to affect most equipment, but causes problems in some analogue and logic circuits that respond fast enough to misinterpret the on-off pulses as a data stream.

Sequential digital logic circuits are particularly vulnerable to contact bounce. The voltage waveform produced by switch bounce usually violates the amplitude and timing specifications of the logic circuit. The result is that the circuit may fail, due to problems such as metastability, race conditions, runt pulses and glitches.

There are a number of techniques for debouncing (mitigating the effects of switch bounce). They can be split into wet contacts, timing-based techniques, and hysteresis-based techniques.


Wet contacts
Mercury wetted switch contacts do not suffer from bounce, as once the connection is made the mercury keeps the contact conducting during mechanical bounce.

Mercury wetted switches are not a popular option today, primarily due to mercury's toxicity.


Timing based

Resistor and capacitor
If an on/off switch is used with a pull-up (or pull-down) resistor, and a single capacitor is placed across the switch (or across the resistor, though this can cause unpleasant current spikes on the power supply lines), then when the switch is closed (generally pressed) the capacitor discharges almost instantly through the switch, but when the switch is opened (generally released) the capacitor takes some time to recharge. Contact bounce therefore has negligible effect on the output. The slow edges can be cleaned up with a Schmitt trigger if necessary. This method has the advantage of fast response to the initial press, but the current surges through the switch may be undesirable. Other RC-based arrangements with different responses are also possible; such circuits are probably the easiest approach when building with simple logic gates and discrete components.
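A rough way to size the parts just described is to make the recharge time to the logic threshold comfortably longer than the worst-case bounce train. A Python sketch of that calculation (the part values, the threshold and the 10 ms bounce figure are assumptions, not from the text):

import math

Vcc, Vth = 5.0, 2.5      # supply and assumed Schmitt-trigger rising threshold
R, C = 100e3, 1e-6       # assumed 100 kilo-ohm pull-up and 1 uF capacitor
t_rise = -R * C * math.log(1 - Vth / Vcc)   # time for the RC node to reach Vth after release
bounce = 10e-3                              # assumed worst-case bounce duration
print(round(t_rise * 1000, 1), "ms to threshold; rides out a", bounce * 1000, "ms bounce:", t_rise > bounce)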


State machines and software
A finite state machine or software running on a CPU can be designed to wait a fixed number of clock cycles after any transition before registering another one. This provides a cheap option for debouncing when a microprocessor, microcontroller or gate array is already in use, but it is unlikely to be worthwhile when constructing with individual logic gates or CPLDs.
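In software this usually takes the form of a small counter: a new raw level is only accepted once it has been stable for a fixed number of ticks. A minimal Python sketch, assuming tick() is called from a periodic timer with the raw switch reading (the names and the tick count are invented for illustration):

DEBOUNCE_TICKS = 5

class Debouncer:
    def __init__(self, initial=0):
        self.stable = initial      # last accepted (debounced) state
        self.candidate = initial   # raw level currently being counted
        self.count = 0

    def tick(self, raw):
        if raw != self.candidate:          # level changed again: restart the count
            self.candidate = raw
            self.count = 0
        elif self.count < DEBOUNCE_TICKS:
            self.count += 1
            if self.count == DEBOUNCE_TICKS:
                self.stable = self.candidate
        return self.stable

d = Debouncer()
print([d.tick(r) for r in [0, 1, 0, 1, 1, 1, 1, 1, 1, 1]])
# bouncy press: output stays 0 during the bounce, then latches to 1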


Sampling
Arguably the simplest way to debounce a switch transition, either in hardware or software, is merely to sample the switch state at intervals longer than any possible train of bounces. This guarantees that any bouncing affects at most one sample, which must agree either with the previous sample or with the following sample. Either case results in only one clean transition in the sampled data. A simple hardware implementation is a single D-type flip-flop clocked at a suitable rate, and software sampling is easy to program. For most switches, a suitable sampling rate introduces less latency than a human can perceive.
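A software version of the sampling approach can be as simple as polling at a period longer than the worst bounce train and taking each sample at face value. A sketch, assuming a 20 ms poll period and a hypothetical read_switch() that would be replaced by real GPIO or port access:

import time

def read_switch():
    return 0   # placeholder for the real hardware read

previous = read_switch()
for _ in range(100):            # poll loop; samples are at least 20 ms apart
    time.sleep(0.020)
    current = read_switch()
    if current != previous:
        # a bounce corrupts at most one sample, so this fires once per real transition
        print("debounced transition to", current)
        previous = current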


Hysteresis
Alternatively, it is possible to build in hysteresis by making the position where a press is detected separate from that where a release is detected. As long as the bounces are small enough not to take the switch between these positions, bounce problems will be eliminated. Hysteresis can be mechanical or electronic (e.g. a Schmitt trigger).
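In software the same idea becomes two thresholds: the reading must cross the upper one to register a press and fall below the lower one to register a release. A minimal Python sketch with made-up threshold values:

PRESS_LEVEL, RELEASE_LEVEL = 0.7, 0.3   # assumed normalised thresholds

def update(pressed, level):
    if not pressed and level > PRESS_LEVEL:
        return True
    if pressed and level < RELEASE_LEVEL:
        return False
    return pressed                      # between the thresholds: hold the last state

state = False
for level in [0.1, 0.5, 0.8, 0.6, 0.4, 0.75, 0.2]:
    state = update(state, level)
    print(level, "->", "pressed" if state else "released")
# excursions between 0.3 and 0.7 never retrigger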


Changeover switch
A changeover switch provides two distinct events, the making of one contact and the breaking of the other. These can be used to feed the inputs of a flip-flop. This way the press will only be detected when the pressed contact is made and the release will only be detected when the released contact is made. When the switch is bouncing around in the middle no change is detected. To get a single logic signal from such a setup a simple SR latch can be used.
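This behaviour can be modelled as an SR latch fed by the two contacts: while the pole is in flight neither contact is made and the latch simply holds its last state, so bounce on either contact cannot produce a spurious opposite-going pulse. A small Python sketch of that logic:

def sr_latch(q, set_, reset):
    if set_ and not reset:
        return True
    if reset and not set_:
        return False
    return q    # neither contact made (switch in mid-travel): hold

q = False
# (set, reset) samples: released, in flight, pressed-contact bounce, pressed
for set_, reset in [(0, 1), (0, 0), (1, 0), (0, 0), (1, 0)]:
    q = sr_latch(q, set_, reset)
    print((set_, reset), "->", q)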


Variable resistance
Normal switches are designed to give a hard on-off but it is also possible to design one that varies more gradually between the hard-on and hard-off states. This keeps the output changes caused by bouncing small. Then by feeding the output to a Schmitt trigger the effect of those bounce based changes can be eliminated.
