2008年10月30日星期四

Windows Server 2003

Windows Server 2003 (also referred to as Win2K3) is a server operating system produced by Microsoft. Introduced on 24 April 2003 as the successor to Windows 2000 Server, it is considered by Microsoft to be the cornerstone of its Windows Server System line of business server products. An updated version, Windows Server 2003 R2, was released to manufacturing on 6 December 2005. Its successor, Windows Server 2008, was released on 4 February 2008.

According to Microsoft, Windows Server 2003 is more scalable and delivers better performance than its predecessor, Windows 2000.

Overview
Released on 24 April 2003,[3] Windows Server 2003 (which carries the version number 5.2) is the follow-up to Windows 2000 Server, incorporating compatibility and other features from Windows XP. Unlike Windows 2000 Server, Windows Server 2003's default installation has none of the server components enabled, to reduce the attack surface of new machines. Windows Server 2003 includes compatibility modes to allow older applications to run with greater stability. It was made more compatible with Windows NT 4.0 domain-based networking. Incorporating and upgrading a Windows NT 4.0 domain to Windows 2000 was considered difficult and time-consuming, and generally was considered an all-or-nothing upgrade, particularly when dealing with Active Directory. Windows Server 2003 brought in enhanced Active Directory compatibility and better deployment support, to ease the transition from Windows NT 4.0 to Windows Server 2003 and Windows XP Professional.

Changes to various services include those to the IIS web server, which was almost completely rewritten to improve performance and security, Distributed File System, which now supports hosting multiple DFS roots on a single server, Terminal Server, Active Directory, Print Server, and a number of other areas. Windows Server 2003 was also the first operating system released by Microsoft after the announcement of its Trustworthy Computing initiative, and as a result, contains a number of changes to security defaults and practices.

The product went through several name changes during the course of development. When first announced in 2000, it was known by its codename, "Whistler Server"; it was then named "Windows 2002 Server" for a brief time in mid-2001, before being renamed "Windows .NET Server" as part of Microsoft's effort to promote its new integrated enterprise and development framework, Microsoft .NET. It was later renamed to "Windows .NET Server 2003". Due to fears of confusing the market about what ".NET" represents and responding to criticism, Microsoft removed .NET from the name during the Release Candidate stage in late-2002. This allowed the name .NET to exclusively apply to the .NET Framework, as previously it had appeared that .NET was just a tag for a generation of Microsoft products.

Editions
Windows Server 2003 comes in a number of editions, each targeted towards a particular size and type of business; see "Compare the Editions of Windows Server 2003" for a concise comparison. In general, all variants of Windows Server 2003 can share files and printers, act as an application server, host message queues, provide email services, authenticate users, act as an X.509 certificate server, provide LDAP directory services, serve streaming media, and perform other server-oriented functions.

Windows Small Business Server
SBS includes Windows Server and additional technologies aimed at providing a small business with a complete technology solution. The technologies are integrated to enable small business with targeted solutions such as the Remote Web Workplace, and offer management benefits such as integrated setup, enhanced monitoring, a unified management console, and remote access.

The Standard Edition of SBS includes Windows SharePoint Services for collaboration, Microsoft Exchange server for e-mail, Fax Server, and the Active Directory for user management. The product also provides a basic firewall, DHCP server and NAT router using either two network cards or one network card in addition to a hardware router.

The Premium Edition of SBS includes the above plus Microsoft SQL Server 2000 and Microsoft Internet Security and Acceleration Server 2004.

SBS has its own type of Client Access License (CAL) that is different and costs slightly more than CALs for the other editions of Windows Server 2003. However, the SBS CAL encompasses the user CALs for Windows Server, Exchange Server, SQL Server, and ISA Server, and hence is less expensive than buying all the other CALs individually.

Web Edition
Windows Server 2003, Web Edition is mainly for building and hosting Web applications, Web pages, and XML Web services. It is designed to be used primarily as an IIS 6.0 Web server and provides a platform for rapidly developing and deploying XML Web services and applications that use ASP.NET technology, a key part of the .NET Framework. This edition does not require Client Access Licenses, and Terminal Server mode is not included. However, Remote Desktop for Administration is available on Windows Server 2003, Web Edition. Only 10 concurrent file-sharing connections are allowed at any moment. It is not possible to install Microsoft SQL Server or Microsoft Exchange software in this edition, although MSDE and SQL Server 2005 Express are fully supported after Service Pack 1 is installed. Despite supporting XML Web services and ASP.NET, UDDI cannot be deployed on Windows Server 2003, Web Edition. The .NET Framework version 2.0 is not included with Windows Server 2003, Web Edition, but can be installed as a separate update from Windows Update.

Windows Server 2003, Web Edition supports a maximum of 2 processors and a maximum of 2 GB of RAM. It cannot act as a domain controller.[8] It is also the only version of Windows Server 2003 that does not limit the number of clients for Windows update services, since it does not require Client Access Licenses.

Standard Edition
Windows Server 2003, Standard Edition is aimed towards small to medium sized businesses. Standard Edition supports file and printer sharing, offers secure Internet connectivity, and allows centralized desktop application deployment. This edition of Windows will run on up to 4 processors with up to 4 GB RAM. 64-bit versions are also available for the x86-64 architecture (AMD64 and EM64T, called collectively x64 by Microsoft). The 64-bit version of Windows Server 2003, Standard Edition is capable of addressing up to 32 GB of RAM and it also supports Non-Uniform Memory Access (NUMA), something the 32-bit version does not do. The 32-bit version is available for students to download free of charge as part of Microsoft's DreamSpark program.

Enterprise Edition
Windows Server 2003, Enterprise Edition is aimed towards medium to large businesses. It is a full-function server operating system that supports up to eight processors and provides enterprise-class features such as eight-node clustering using Microsoft Cluster Server (MSCS) software and support for up to 32 GB of memory through PAE (added with the /PAE boot string). Enterprise Edition also comes in 64-bit versions for the Itanium and x64 architectures. The 64-bit versions of Windows Server 2003, Enterprise Edition are capable of addressing up to 1 TB of memory. Both 32-bit and 64-bit versions support Non-Uniform Memory Access (NUMA). It also provides the ability to hot-add supported hardware. Enterprise Edition is also required to issue custom certificate templates.

Datacenter Edition
Windows Server 2003, Datacenter Edition is designed[9] for infrastructures demanding high security and reliability. It is available for x86, Itanium, and x86-64 processors. It supports up to 32 processors on 32-bit hardware or 64 processors on 64-bit hardware. The 32-bit version limits memory addressability to 128 GB, while the 64-bit versions support up to 2 TB. Windows Server 2003, Datacenter Edition, also allows limiting processor and memory usage on a per-application basis.

Windows Server 2003 Datacenter Edition also supports Non-Uniform Memory Access. If supported by the system, Windows, with help from the system firmware creates a Static Resource Affinity Table that defines the NUMA topology of the system. Windows then uses this table to optimize memory accesses, and provide NUMA awareness to applications, thereby increasing the efficiency of thread scheduling and memory management.

Windows Server 2003, Datacenter Edition has better support for Storage Area Networks (SAN). It features a service which uses Windows sockets to emulate TCP/IP communication over native SAN service providers, thereby allowing a SAN to be accessed over any TCP/IP channel. With this, any application that can communicate over TCP/IP can use a SAN, without any modification to the application.

Windows Server 2003, Datacenter Edition, also supports 8-node clustering. Clustering increases the availability and fault tolerance of server installations by distributing and replicating the service among many servers. Windows supports clustering with each node having its own dedicated storage, or with all nodes connected to a common Storage Area Network (SAN), which may itself be running Windows or a non-Windows operating system. The SAN may be connected to other computers as well.


Windows Compute Cluster Server
Windows Compute Cluster Server 2003 (CCS), released in June 2006, is designed for high-end applications that require high performance computing clusters. It is designed to be deployed on numerous computers to be clustered together to achieve supercomputing speeds. Each Compute Cluster Server network comprises at least one controlling head node and subordinate processing nodes that carry out most of the work.

Compute Cluster Server uses the Microsoft Message Passing Interface v2 (MS-MPI) to communicate between the processing nodes on the cluster network. It ties nodes together with a powerful inter-process communication mechanism, which can be complex because of communications between hundreds or even thousands of processors working in parallel.

The application programming interface consists of over 160 functions. A job launcher enables users to submit jobs for execution on the cluster. MS-MPI was designed to be compatible with the reference open source MPI2 specification, which is widely used in High-performance computing (HPC). With some exceptions because of security considerations, MS-MPI covers the complete set of MPI2 functionality as implemented in MPICH2, except for the planned future features of dynamic process spawn and publishing.

Windows Storage Server
Windows Storage Server 2003, a part of the Windows Server 2003 series, is a specialized server operating system for Network Attached Storage (NAS). It is optimized for file and print sharing and for Storage Area Network (SAN) scenarios. It is only available through original equipment manufacturers (OEMs). Unlike other Windows Server 2003 editions that provide file and printer sharing functionality, Windows Storage Server 2003 does not require any Client Access Licenses.

Windows Storage Server 2003 NAS equipment can be headless, meaning it operates without a monitor, keyboard, or mouse and is administered remotely. Such devices are plugged into any existing IP network and the storage capacity is available to all users. Windows Storage Server 2003 can use RAID arrays to provide data redundancy, fault-tolerance and high performance. Multiple such NAS servers can be clustered to appear as a single device. This allows for very high performance as well as allowing the service to remain up even if one of the servers goes down.

Windows Storage Server 2003 can also be used to create a Storage Area Network, in which the data is transferred in terms of chunks rather than files, thus providing more granularity to the data that can be transferred. This provides higher performance to database and transaction processing applications. Windows Storage Server 2003 also allows NAS devices to be connected to a SAN.

Windows Storage Server 2003 R2, as a follow-up to Windows Storage Server 2003, adds file-server performance optimization, Single Instance Storage (SIS), and index-based search. Single instance storage (SIS) scans storage volumes for duplicate files, and moves the duplicate files to the common SIS store. The file on the volume is replaced with a link to the file. This substitution reduces the amount of storage space required, by as much as 70%.[10]
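
The idea behind Single Instance Storage can be pictured as a deduplication pass: hash each file, move the first copy of each unique content into a common store, and replace every duplicate with a link back to the stored copy. The sketch below is a minimal illustration of that concept only, not the actual SIS implementation; the function name, the use of SHA-256, and the use of hard links are assumptions for the example.

```python
import hashlib
import os

def dedupe(volume_dir, store_dir):
    """Toy single-instance pass: move the first copy of each unique file
    into a common store (keyed by content hash) and turn every file on
    the volume into a hard link to the stored copy."""
    os.makedirs(store_dir, exist_ok=True)
    for root, _dirs, files in os.walk(volume_dir):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            stored = os.path.join(store_dir, digest)
            if not os.path.exists(stored):
                os.replace(path, stored)   # first copy moves into the store
            else:
                os.remove(path)            # duplicate content is dropped
            os.link(stored, path)          # the visible file becomes a link
```

After a pass like this, ten identical copies of a file occupy the space of one, which is where the claimed savings of up to 70% come from on volumes with heavy duplication.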

Windows Storage Server R2 provides an index-based, full-text search based on the indexing engine already built into Windows Server.[10] The updated search engine speeds up indexed searches on network shares. Storage Server R2 also provides filters for searching many standard file formats, such as .zip, AutoCAD, XML, MP3, and .pdf, and all Microsoft Office file formats.
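
Why an index speeds up search can be shown with a tiny inverted index: each document is tokenized once, and a query is then answered by set intersection instead of scanning every file. This is a conceptual illustration only, not the Indexing Service's actual data structures; the function names are assumptions.

```python
import re
from collections import defaultdict

def build_index(docs):
    """Map each lowercase word to the set of document names containing it."""
    index = defaultdict(set)
    for name, text in docs.items():
        for word in re.findall(r"\w+", text.lower()):
            index[word].add(name)
    return index

def search(index, query):
    """Return the documents that contain every word of the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = index.get(words[0], set()).copy()
    for word in words[1:]:
        results &= index.get(word, set())   # intersect posting sets
    return results
```

A query touches only the posting sets for its words, so its cost is independent of how many files never mention them; the per-format filters mentioned above correspond to the tokenization step here.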

Windows Storage Server 2003 R2 includes built in support for Windows SharePoint Services and Microsoft SharePoint Portal Server, and adds Storage Management snap-in for the Microsoft Management Console. It can be used to centrally manage storage volumes, including DFS shares, on servers running Windows Storage Server R2.

Windows Storage Server R2 can be used as an iSCSI target with the standard and enterprise editions of Windows Storage Server R2, incorporating the WinTarget iSCSI technology that Microsoft acquired from StringBean Software in 2006. This is an add-on feature available for purchase through OEM partners as an iSCSI feature pack, or is included in some versions of WSS as configured by OEMs.


Windows 2000

Windows 2000 (also referred to as Win2K) is a preemptive, interruptible, graphical and business-oriented operating system designed to work with either uniprocessor or symmetric multi-processor computers. It is part of the Microsoft Windows NT line of operating systems and was released on 17 February 2000. It was succeeded by Windows XP in October 2001 and Windows Server 2003 in April 2003. It is a hybrid kernel operating system.

Four editions of Windows 2000 were released: Professional, Server, Advanced Server, and Datacenter Server.[7] Additionally, Microsoft sold Windows 2000 Advanced Server Limited Edition and Windows 2000 Datacenter Server Limited Edition, which were released in 2001 and run on 64-bit Intel Itanium microprocessors. While each edition of Windows 2000 was targeted to a different market, they share a core set of features, including many system utilities such as the Microsoft Management Console and standard system administration applications. Support for people with disabilities has been improved over Windows NT 4.0 with a number of new assistive technologies,[9] and Microsoft increased support for different languages[10] and locale information.[11] All versions of the operating system support the Windows NT filesystem, NTFS 3.0,[12] the Encrypting File System, as well as basic and dynamic disk storage.[13] The Windows 2000 Server family has additional features,[14] including the ability to provide Active Directory services (a hierarchical framework of resources), Distributed File System (a file system that supports sharing of files) and fault-redundant storage volumes. Windows 2000 can be installed through either a manual or unattended installation.[15] Unattended installations rely on the use of answer files to fill in installation information, and can be performed through a bootable CD, using Microsoft Systems Management Server, or by the System Preparation Tool.

Microsoft marketed Windows 2000 as the most secure Windows version ever,[17] but it became the target of a number of high-profile virus attacks such as Code Red and Nimda.[18] More than eight years after its release, it continues to receive patches for security vulnerabilities nearly every month.

History
Windows 2000 is a continuation of the Microsoft Windows NT family of operating systems, replacing Windows NT 4.0. Originally called Windows NT 5.0, then Windows NT 2000, Microsoft changed the name to Windows 2000 on 27 October 1998.[19] It is also the first Windows version that was released without a code name, though Windows 2000 Service Pack 1 was codenamed "Asteroid"[20] and Windows 2000 64-bit was codenamed "Janus"[21] (not to be confused with Windows 3.1, which had the same codename). The first beta for Windows 2000 was released in September 1997[22] and several further betas followed until Beta 3, which was released on 29 April 1999.[22] During development, there was a DEC Alpha build of Windows 2000, but it was abandoned with the second beta.[22] From here, Microsoft issued three release candidates between July and November 1999, and finally released the operating system to partners on 12 December 1999.[23] The public could buy the full version of Windows 2000 on 17 February 2000. Three days before this event, which Microsoft advertised as "a standard in reliability", a leaked memo from Microsoft reported on by Mary Jo Foley revealed that Windows 2000 had "over 63,000 potential known defects".[24] After Foley's article was published, Microsoft blacklisted her for a considerable time.[25] InformationWeek summarized the release: "our tests show the successor to NT 4.0 is everything we hoped it would be. Of course, it isn't perfect either."[26] Wired News later described the results of the February launch as "lackluster".[27] Novell criticized Microsoft's Active Directory, the new directory service architecture, as less scalable or reliable than its own Novell Directory Services (NDS) alternative.

Windows 2000 was first planned to replace both Windows 98 and Windows NT 4.0. However, that changed later. Instead, an updated version of Windows 98 called Windows 98 Second Edition was released in 1999.[22] Close to the release of Windows 2000 Service Pack 1, Microsoft released Windows 2000 Datacenter Server, targeted at large-scale computing systems with support for 32 processors, on 29 September 2000.

On or shortly before 12 February 2004, "portions of the Microsoft Windows 2000 and Windows NT 4.0 source code were illegally made available on the Internet".[29] The source of the leak remains unannounced. Microsoft issued the following statement: "Microsoft source code is both copyrighted and protected as a trade secret. As such, it is illegal to post it, make it available to others, download it or use it."

Despite the warnings, the archive containing the leaked code spread widely on the file-sharing networks. On 16 February 2004, an exploit "allegedly discovered by an individual studying the leaked source code"[29] for certain versions of Microsoft Internet Explorer was reported.


Architecture
The Windows 2000 operating system architecture consists of two layers (user mode and kernel mode), with many different modules within both.
See also: Architecture of Windows NT
Windows 2000 is a highly modular system that consists of two main layers: a user mode and a kernel mode.[30] The user mode refers to the mode in which user programs are run. Such programs only have access to certain system resources, while the kernel mode has unrestricted access to the system memory and external devices. All user mode applications access system resources through the Executive which runs in kernel mode.[31]


User mode
User mode in Windows 2000 is made of subsystems capable of passing I/O requests to the appropriate kernel mode drivers by using the I/O manager. Two subsystems make up the user mode layer of Windows 2000: the environment subsystem and the integral subsystem.[32]

The environment subsystem is designed to run applications written for many different types of operating systems. These applications, however, run at a lower priority than kernel mode processes.


Common features
Windows 2000 introduced many of the new features of Windows 98 and Windows 98 SE into the NT line,[43] such as the Windows Desktop Update,[43] Internet Explorer 5,[43] Outlook Express, NetMeeting, FAT32 support,[44] Windows Driver Model,[45] Internet Connection Sharing,[43] Windows Media Player, WebDAV support[46] etc. Certain new features are common across all editions of Windows 2000, among them NTFS 3.0,[12] the Microsoft Management Console (MMC),[47] UDF support, the Encrypting File System (EFS),[48] Logical Disk Manager,[49] Image Color Management 2.0,[50] support for PostScript 3-based printers,[50] OpenType (.OTF) and Type 1 PostScript (.PFB) font support,[50] the Data protection API (DPAPI),[51] an LDAP/Active Directory-enabled Address Book,[52] usability enhancements and multi-language and locale support. Windows 2000 also comes with several system utilities. Microsoft also introduced a new feature to protect critical system files, called Windows File Protection. This protects critical Windows system files by preventing programs other than Microsoft's operating system update mechanisms such as the Package Installer, Windows Installer and other update components from modifying them.[53]

Microsoft recognized that a serious error or a stop error could cause problems for servers that needed to be constantly running and so provided a system setting that would allow the server to automatically reboot when a stop error occurred.[54] Also included is an option to dump the first 64 KB of memory to disk (the smallest amount of memory that is useful for debugging purposes, also known as a minidump), a dump of only the kernel's memory, or a dump of the entire contents of memory to disk, as well as write that this event happened to the Windows 2000 event log.[54] In order to improve performance on servers running Windows 2000, Microsoft gave administrators the choice of optimizing the operating system's memory and processor usage patterns for background services or for applications.[55] Windows 2000 also introduced core system administration and management features such as the Windows Installer,[56] Windows Management Instrumentation[57] and Event Tracing for Windows (ETW)[58] into the operating system.


Improvements to Windows Explorer
The integrated media player in Windows Explorer playing a MIDI sequence.
Windows Explorer has been enhanced in several ways in Windows 2000. It is the first Windows NT release to include Active Desktop, first introduced as a part of Internet Explorer 4.0 (specifically Windows Desktop Update), and only pre-installed in Windows 98 by that time.[59] It allowed users to customize the way folders look and behave by using HTML templates, having the file extension HTT. This feature was abused by computer viruses that employed malicious scripts, Java applets, or ActiveX controls in folder template files as their infection vector. Two such viruses are VBS/Roor-C[60] and VBS.Redlof.a.[61] The "Web-style" folders view, with the left Explorer pane displaying details for the object currently selected, is turned on by default in Windows 2000. For certain file types, such as pictures and media files, the preview is also displayed in the left pane.[62] Until the dedicated interactive preview pane appeared in Windows Vista, Windows 2000 had been the only Windows release to feature an interactive media player as the previewer for sound and video files. However, such a previewer can be enabled in Windows Me and Windows XP through the use of third-party shell extensions, as the updated Windows Explorer allows for custom thumbnail previewers and tooltip handlers. The default file tooltip displays file title, author, subject and comments;[63] this metadata may be read from a special NTFS stream, if the file is on an NTFS volume, or from an OLE structured storage stream, if the file is a structured storage document. All Microsoft Office documents since Office 95[64] make use of structured storage, so their metadata is displayable in the Windows 2000 Explorer default tooltip. File shortcuts can also store comments which are displayed as a tooltip when the mouse hovers over the shortcut.

The right pane of Windows 2000 Explorer, which usually just lists files and folders, can also be customized. For example, the contents of the system folders aren't displayed by default, instead showing in the right pane a warning to the user that modifying the contents of the system folders could harm their computer. It's possible to define additional Explorer panes by using DIV elements in folder template files.[59] Other Explorer UI elements that can be customized include columns in "Details" view, icon overlays, and search providers: the new DHTML-based search pane is integrated into Windows 2000 Explorer, unlike the separate search dialog found in all previous Explorer versions. This degree of customizability is new to Windows 2000; neither Windows 98 nor the Desktop Update could provide it.[65] The Indexing Service has also been integrated into the operating system, and the search pane built into Explorer allows searching files indexed by its database.


NTFS 3.0

Windows 2000 supports disk quotas, which can be set via the "Quota" tab found in the hard disk properties dialog box.
Microsoft released version 3.0 of NTFS[12] (sometimes incorrectly called NTFS 5 in relation to the kernel version number) as part of Windows 2000; this introduced disk quotas, file-system-level encryption, sparse files and reparse points. Sparse files allow for the efficient storage of data sets that are very large yet contain many areas that only have zeros.[67] Reparse points allow the object manager to reset a file namespace lookup and let file system drivers implement changed functionality in a transparent manner.[68] Reparse points are used to implement volume mount points, junctions, Hierarchical Storage Management, Native Structured Storage and Single Instance Storage.[68] Volume mount points and directory junctions allow for a file to be transparently referred from one file or directory location to another.
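
The space saving from sparse files is easy to demonstrate: seeking far past the end of a file and writing one byte produces a large logical file whose unwritten hole need not occupy disk blocks. On NTFS the file must additionally be marked sparse via the FSCTL_SET_SPARSE control code; the snippet below shows only the portable seek-and-write part, on a file system that supports sparse allocation.

```python
import os
import tempfile

# Create a file with a 10 MB hole followed by a single written byte.
path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.seek(10 * 1024 * 1024)   # leave a 10 MB hole of unwritten zeros
    f.write(b"\x01")           # logical size becomes 10 MB + 1 byte

print(os.path.getsize(path))           # logical size: 10485761 bytes
print(os.stat(path).st_blocks * 512)   # allocated bytes: typically far smaller
```

Reading back the hole returns zeros, so applications see an ordinary file; only the allocation on disk differs.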


Main article: Encrypting File System
The Encrypting File System (EFS) introduced strong file system-level encryption to Windows. It allows any folder or drive on an NTFS volume to be encrypted transparently by the user.[48] EFS works together with the EFS service, Microsoft's CryptoAPI and the EFS File System Runtime Library (FSRTL).[69] To date, its encryption has not been compromised.

EFS works by encrypting a file with a bulk symmetric key (also known as the File Encryption Key, or FEK), which is used because it takes less time to encrypt and decrypt large amounts of data than if an asymmetric key cipher were used.[69] The symmetric key used to encrypt the file is then encrypted with a public key associated with the user who encrypted the file, and this encrypted data is stored in the header of the encrypted file. To decrypt the file, the file system uses the private key of the user to decrypt the symmetric key stored in the file header. It then uses the symmetric key to decrypt the file. Because this is done at the file system level, it is transparent to the user.[70]
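
The hybrid scheme described above can be sketched in a few lines. This is a toy model only: a SHA-256 counter-mode keystream stands in for the real symmetric cipher, and XOR with a shared wrap key stands in for the RSA public/private key pair EFS actually uses to wrap the FEK. All function names are assumptions, and none of this is suitable for protecting real data.

```python
import hashlib
import os

def keystream(key, length):
    """Derive a toy keystream from a key with SHA-256 in counter mode.
    Stands in for a real symmetric cipher; do not use for real data."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data, key):
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def encrypt_file_efs_style(plaintext, wrap_key):
    fek = os.urandom(32)            # random File Encryption Key (FEK)
    body = xor(plaintext, fek)      # bulk data encrypted with the fast FEK
    header = xor(fek, wrap_key)     # FEK wrapped with the user's key,
    return header, body             # stored in the file header

def decrypt_file_efs_style(header, body, wrap_key):
    fek = xor(header, wrap_key)     # unwrap the FEK from the header first
    return xor(body, fek)           # then decrypt the body with the FEK
```

The structure mirrors the paragraph above: only the short FEK is processed with the (slow, asymmetric in real EFS) wrap operation, while the bulk data uses the fast symmetric key.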

If a user loses access to their key, EFS includes built-in support for recovery agents that can decrypt the user's files. A recovery agent is a user who is authorized by a public key recovery certificate to decrypt files belonging to other users using a special private key. By default, local administrators are recovery agents, although this can be customized using Group Policy.

Games

Windows 2000 included version 7.0 of the DirectX API, commonly used by game developers on Windows 98.[74] The last version of DirectX that Windows 2000 supports is DirectX 9.0c (Shader Model 3.0), that shipped with Windows XP Service Pack 2. Currently, Microsoft publishes quarterly updates to DirectX 9.0c; these updates contain bug fixes to the core runtime and some additional libraries such as D3DX, XAudio 2, XInput and Managed DirectX components. The majority of games written for recent versions of DirectX can therefore run on Windows 2000.


System utilities
Windows 2000 introduced the Microsoft Management Console (MMC), which is used to create, save, and open administrative tools.[47] Each of these is called a console, and most allow an administrator to administer other Windows 2000 computers from one centralised computer. Each console can contain one or many specific administrative tools, called snap-ins.[47] These can be either standalone (with one function), or an extension (adding functions to an existing snap-in). In order to control which snap-ins can be seen in a console, the MMC allows consoles to be created in author mode or user mode.[47] Author mode allows snap-ins to be added, new windows to be created, all portions of the console tree to be displayed, and consoles to be saved. User mode allows consoles to be distributed with restrictions applied. User mode consoles can grant full access to the user for any change, or they can grant limited access, preventing users from adding snap-ins to the console though they can view multiple windows in a console. Alternatively, users can be granted limited access, preventing them from adding to the console and stopping them from viewing multiple windows in a single console.[75]

The Windows 2000 Computer Management console can perform many system tasks. It is pictured here starting a disk defragmentation.
The main tools that come with Windows 2000 can be found in the Computer Management console (in Administrative Tools in the Control Panel).[76] This contains the Event Viewer[77] (a means of seeing events and the Windows equivalent of a log file), a system information utility, a backup utility, Task Scheduler, and management consoles to view open shared folders and shared folder sessions, configure and manage COM+ applications, configure Group Policy,[78] manage all the local users and user groups, and a device manager.[79] It contains Disk Management and Removable Storage[80] snap-ins, a disk defragmenter, as well as a performance diagnostic console, which displays graphs of system performance and configures data logs and alerts. It also contains a service configuration console, which allows users to view all installed services and to stop and start them, as well as configure what those services should do when the computer starts.

Windows 2000 comes with two utilities to edit the Windows registry, REGEDIT.EXE and REGEDT32.EXE.[81] REGEDIT has been directly ported from Windows 98, and therefore does not support editing registry permissions.[81] REGEDT32 has the older multiple document interface (MDI) and can edit registry permissions in the same manner that Windows NT's REGEDT32 program could. REGEDIT has a left-side tree view of the Windows registry, lists all loaded hives and represents the three components of a value (its name, type, and data) as separate columns of a table. REGEDT32 has a left-side tree view, but each hive has its own window, so the tree displays only keys and it represents values as a list of strings. REGEDIT supports right-clicking of entries in a tree view to adjust properties and other settings. REGEDT32 requires all actions to be performed from the top menu bar. Windows XP is the first system to integrate these two programs into a single utility, adopting the REGEDIT behavior with the additional NT features.[81]

The System File Checker (SFC) also comes with Windows 2000. It is a command line utility that scans system files and verifies whether they were signed by Microsoft and works in conjunction with the Windows File Protection mechanism. It can also repopulate and repair all the files in the Dllcache folder.[82]
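
The checking step SFC performs can be modeled as comparing each protected file against a record of known-good state and restoring mismatches from a cache directory. This is a conceptual sketch only: Windows File Protection actually verifies digital signatures from catalog files rather than a plain hash manifest, and the names below are assumptions.

```python
import hashlib
import os
import shutil

def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verify_and_repair(system_dir, cache_dir, manifest):
    """For each file in the manifest, restore it from the cache directory
    if it is missing or its content hash no longer matches."""
    repaired = []
    for name, good_hash in manifest.items():
        target = os.path.join(system_dir, name)
        if not os.path.exists(target) or sha256_of(target) != good_hash:
            shutil.copy2(os.path.join(cache_dir, name), target)
            repaired.append(name)
    return repaired
```

The cache directory plays the role of the Dllcache folder mentioned above: a local source of pristine copies that lets repairs happen without the installation CD.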


Recovery Console
The Recovery Console is run from outside the installed copy of Windows to perform maintenance tasks that can neither be run from within it nor feasibly be run from another computer or copy of Windows 2000.[83] It is usually used to recover the system from problems that cause booting to fail, which would render other tools useless.

It has a simple command line interface, used to check and repair the hard drive(s), repair boot information (including NTLDR), replace corrupted system files with fresh copies from the CD, or enable/disable services and drivers for the next boot.
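A repair session might look like the following; the commands shown (chkdsk, fixboot, fixmbr, copy, listsvc, disable) are standard Recovery Console commands, while the drive letters and the service name are illustrative:

```text
C:\WINNT> chkdsk /r                 (check the disk and recover readable data)
C:\WINNT> fixboot C:                (write a new boot sector to the system partition)
C:\WINNT> fixmbr                    (repair the master boot record)
C:\WINNT> copy D:\i386\ntldr C:\    (restore NTLDR from the installation CD)
C:\WINNT> listsvc                   (list services/drivers and their start types)
C:\WINNT> disable badservice        (stop a faulty service from starting at next boot)
```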


Distributed File System
The Distributed File System (DFS) allows shares in multiple different locations to be logically grouped under one folder, or DFS root. When users try to access a network share under the DFS root, they are really looking at a DFS link, and the DFS server transparently redirects them to the correct file server and share. A DFS root can only exist on a Windows 2000 version that is part of the server family, and only one DFS root can exist on each server.

There are two ways of implementing a DFS namespace on Windows 2000: as a standalone DFS root or as a domain-based DFS root. A standalone DFS root exists only on the local computer and thus does not use Active Directory. Domain-based DFS roots exist within Active Directory and can have their information distributed to other domain controllers within the domain, which provides fault tolerance to DFS. DFS roots that exist on a domain must be hosted on a domain controller or on a domain member server. The file and root information is replicated via the Microsoft File Replication Service (FRS).


Active Directory
A new way of organizing Windows network domains, or groups of resources, called Active Directory, is introduced with Windows 2000 to replace Windows NT's earlier domain model. Active Directory's hierarchical nature gives administrators a built-in way to manage user and computer policies and user accounts, and to automatically deploy programs and updates, with a greater degree of scalability and centralization than previous Windows versions provided. It is one of the main reasons many corporations migrated to Windows 2000.[citation needed] User information stored in Active Directory also provides a convenient phone-book-like function to end users. Active Directory domains can vary from small installations with a few hundred objects to large installations with millions. Active Directory can organise and link groups of domains into a contiguous domain name space to form trees. Groups of trees outside the same namespace can be linked together to form forests.

Active Directory services can only be installed on a Windows 2000 Server, Advanced Server, or Datacenter Server computer, not on a Windows 2000 Professional computer. However, Windows 2000 Professional is the first client operating system able to exploit Active Directory's new features. As part of an organization's migration, Windows NT clients continue to function until all clients are upgraded to Windows 2000 Professional, at which point the Active Directory domain can be switched to native mode and maximum functionality achieved.

Active Directory requires a DNS server that supports SRV resource records, or that an organization's existing DNS infrastructure be upgraded to support this. There must be one or more domain controllers to hold the Active Directory database and provide Active Directory directory services.
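For illustration, the SRV records a domain controller registers look roughly like the following zone-file entries. The domain name, TTL, and host below are hypothetical; the _service._protocol naming and the priority/weight/port/target fields follow the standard SRV record format:

```text
; Hypothetical SRV records for a domain controller "dc1" in "example.com"
_ldap._tcp.example.com.            600 IN SRV 0 100 389 dc1.example.com.
_kerberos._tcp.example.com.        600 IN SRV 0 100 88  dc1.example.com.
_ldap._tcp.dc._msdcs.example.com.  600 IN SRV 0 100 389 dc1.example.com.
```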


Volume fault tolerance
Along with support for simple, spanned and striped volumes, the server family of Windows 2000 also supports fault-tolerant volume types. The types supported are mirrored volumes and RAID-5 volumes:

Mirrored volumes: the volume contains several disks, and when data is written to one it is also written to the other disks. This means that if one disk fails, the data can be totally recovered from the other disk. Mirrored volumes are also known as RAID-1.
RAID-5 volumes: a RAID-5 volume consists of multiple disks, and it uses block-level striping with parity data distributed across all member disks. Should a disk fail in the array, the parity blocks from the surviving disks are combined mathematically with the data blocks from the surviving disks to reconstruct the data on the failed drive "on-the-fly".
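The parity arithmetic behind RAID-5 reconstruction is plain XOR. The following sketch illustrates the technique itself, not any Windows component: it builds a parity block from three data blocks, then rebuilds one block after a simulated disk failure.

```python
# Sketch of the XOR parity arithmetic behind RAID-5 reconstruction.
# Block contents are illustrative; real volumes stripe blocks and
# rotate the parity position across the member disks.

def parity(blocks):
    """XOR corresponding bytes of the given blocks into one parity block."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three data disks plus one parity block per stripe.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
p = parity([d0, d1, d2])

# If disk 1 fails, XORing the surviving data blocks with the parity
# block reconstructs its contents "on-the-fly".
rebuilt = parity([d0, d2, p])
assert rebuilt == d1
```

The same identity (XOR of the survivors and the parity equals the lost block) is what lets a degraded array keep serving reads while a replacement disk is rebuilt.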

Deployment
Windows 2000 can be deployed to a site via various methods. It can be installed onto servers via traditional media (such as CD) or via distribution folders that reside on a shared folder. Installations can be attended or unattended. During a manual installation, the administrator must specify configuration options. Unattended installations are scripted via an answer file, a predefined INI-format script with all the configuration options filled in. An answer file can be created manually or with the graphical Setup Manager. The Winnt.exe or Winnt32.exe program then uses that answer file to automate the installation. Unattended installations can be performed via a bootable CD, using Microsoft Systems Management Server (SMS), via the System Preparation Tool (Sysprep), via the Winnt32.exe program using the /syspart switch, or via Remote Installation Services (RIS). The ability to slipstream a service pack into the original operating system setup files was also introduced in Windows 2000.[87]
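A minimal answer file fragment might look like the following; the section and key names follow the documented unattended-setup format, but the values are invented for illustration:

```text
; unattend.txt (fragment, illustrative values)
[Unattended]
UnattendMode = FullUnattended
TargetPath = \WINNT

[UserData]
FullName = "Example User"
OrgName = "Example Corp"
ComputerName = SERVER01
```

Setup would then be launched with something like winnt32 /unattend:unattend.txt.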

The Sysprep method is started on a standardized reference computer (though, apart from the HAL, ACPI support, and mass storage devices, the hardware need not be similar) and it copies the required installation files from the reference computer to the target computers. The hard drive does not need to be in the target computer and may be swapped into it at any time, with the hardware configured later. The Winnt.exe program must also be passed a /unattend switch that points to a valid answer file and a /s switch that points to one or more valid installation sources.

Sysprep allows the duplication of a disk image on an existing Windows 2000 Server installation to multiple servers. This means that all applications and system configuration settings will be copied across to the new installations, and thus, the reference and target computers must have the same HALs, ACPI support, and mass storage devices — though Windows 2000 automatically detects Plug and Play devices. The primary reason for using Sysprep is to quickly deploy Windows 2000 to a site that has multiple computers with standard hardware. (If a system had different HALs, mass storage devices or ACPI support, then multiple images would need to be maintained.)

Systems Management Server can be used to upgrade multiple computers to Windows 2000. These must be running Windows NT 3.51, Windows NT 4.0, Windows 98 or Windows 95 OSR2.x along with the SMS client agent that can receive software installation operations. Using SMS allows installations over a wide area and provides centralised control over upgrades to systems.

Remote Installation Services (RIS) is a means to automatically install Windows 2000 Professional (but not Windows 2000 Server) to a local computer over a network from a central server. Images do not have to support specific hardware configurations, and the security settings can be configured after the computer reboots, as the service generates a new unique security ID (SID) for the machine. This is required so that local accounts are given the right identifier and do not clash with those of other Windows 2000 Professional computers on a network.[88] RIS requires that client computers be able to boot over the network, either via a network interface card that has a Pre-Boot Execution Environment (PXE) boot ROM installed, or via a network card supported by the remote boot disk generator. The remote computer must also meet the Net PC specification. The server that RIS runs on must be Windows 2000 Server, and it must be able to access a network DNS service, a DHCP service, and the Active Directory services.[89]


Editions
Windows 2000 Professional cover
Microsoft released various editions of Windows 2000 for different markets and business needs: Professional, Server, Advanced Server and Datacenter Server. Each was packaged separately.

Windows 2000 Professional was designed as the desktop operating system for businesses and power users. It is the client version of Windows 2000. It offers greater security and stability than many of the previous Windows desktop operating systems. It supports up to two processors, and can address up to 4 GB of RAM. The system requirements are a Pentium processor of 133 MHz or greater, at least 32 MB of RAM, 650 MB of hard drive space, and a CD-ROM drive (recommended: Pentium II, 128 MB of RAM, 2 GB of hard drive space, and CD-ROM drive).[90]

Windows 2000 Server SKUs share the same user interface with Windows 2000 Professional, but contain additional components that let the computer perform server roles and run infrastructure and application software. A significant new component introduced in the server SKUs is Active Directory, an enterprise-wide directory service based on LDAP. Additionally, Microsoft integrated Kerberos network authentication, replacing the often-criticised NTLM authentication system used in previous versions. This also provided a purely transitive-trust relationship between Windows 2000 domains in a forest (a collection of one or more Windows 2000 domains that share a common schema, configuration, and global catalog, linked with two-way transitive trusts). Furthermore, Windows 2000 introduced a DNS server that allows dynamic registration of IP addresses. Windows 2000 Server requires 128 MB of RAM and 1 GB of hard disk space; requirements may be higher depending on installed components.[90]

Windows 2000 Advanced Server is a variant of Windows 2000 Server designed for medium-to-large businesses. It offers a clustering infrastructure for high availability and scalability of applications and services, including main memory support of up to 8 gigabytes (GB) on Physical Address Extension (PAE) systems and the ability to do 8-way SMP. It supports TCP/IP load balancing and enhanced two-node server clusters based on the Microsoft Cluster Server (MSCS) in Windows NT Server 4.0 Enterprise Edition.[91] A limited number of copies of an IA-64 version, called Windows 2000 Advanced Server, Limited Edition, were made available via OEMs. System requirements are similar to those of Windows 2000 Server,[90] though they may need to be higher to scale to larger infrastructure.

Windows 2000 Datacenter Server is a variant of Windows 2000 Server designed for large businesses that move large quantities of confidential or sensitive data frequently via a central server.[92] Like Advanced Server, it supports clustering, failover, and load balancing. A limited number of copies of an IA-64 version, called Windows 2000 Datacenter Server, Limited Edition, were made available via OEMs. Its minimum system requirements are modest, but it was designed to be capable of handling advanced, fault-tolerant, and scalable hardware, for instance computers with up to 32 CPUs and 64 GB of RAM, with rigorous system testing and qualification, hardware partitioning, coordinated maintenance, and change control.


Total cost of ownership
In October 2002, Microsoft commissioned IDC to determine the total cost of ownership (TCO) for enterprise applications on Windows 2000 versus the TCO of the same applications on Linux. IDC's report is based on telephone interviews of IT executives and managers of 104 North American companies in which they determined what they were using for a specific workload for file, print, security and networking services. IDC determined that the four areas where Windows 2000 had a better TCO than Linux — over a period of five years for an average organization of 100 employees — were file, print, network infrastructure and security infrastructure. They determined, however, that Linux had a better TCO than Windows 2000 for web serving. The report also found that the greatest cost was not in the procurement of software and hardware, but in staffing costs and downtime. While the report applied a 40% productivity factor during IT infrastructure downtime, recognizing that employees are not entirely unproductive, it did not consider the impact of downtime on the profitability of the business. The report stated that Linux servers had less unplanned downtime than Windows 2000 servers. It found that most Linux servers ran less workload per server than Windows 2000 servers and also that none of the businesses interviewed used 4-way SMP Linux computers. The report also did not take into account specific application servers — servers that need low maintenance and are provided by a specific vendor. The report did emphasize that TCO was only one factor in considering whether to use a particular IT platform, and also noted that as management and server software improved and became better packaged the overall picture shown could change.[93]


Current status
Windows 2000 has now been superseded by newer Microsoft operating systems: Windows 2000 Server products by Windows Server 2003, and Windows 2000 Professional by Windows XP Professional.

The Windows 2000 family of operating systems moved from mainstream support to the extended support phase on 30 June 2005. Microsoft says that this marks the progression of Windows 2000 through the Windows lifecycle policy. Under mainstream support, Microsoft freely provides design changes if any, service packs and non-security related updates in addition to security updates, whereas in extended support, service packs are not provided and non-security updates require contacting the support personnel by e-mail or phone. Under the extended support phase, Microsoft continues to provide critical security updates every month for all components of Windows 2000 (including Internet Explorer 5.01 SP4) and paid per-incident support for technical issues. Because of Windows 2000's age, Microsoft is not offering current components such as Internet Explorer 7 for it. They claim that IE 7 relies on security features designed only for Windows XP Service Pack 2 and Windows Vista, and thus porting to the Windows 2000 platform would be non-trivial.[94] Microsoft is strongly advising all users still running Windows 2000 Professional and Server to consider upgrading their operating systems to current operating systems for increased security. While users of Windows 2000 are eligible to receive the upgrade license for Windows Vista or Windows Server 2008, neither of these operating systems can directly perform an upgrade installation from Windows 2000; a clean installation must be performed on computers running Windows 2000.

All Windows 2000 support including security updates and security-related hotfixes will be terminated on 13 July 2010.

Windows 2000 received four full service packs and, following SP4 (the final service pack), one rollup update package. The service packs were SP1 on 15 August 2000, SP2 on 16 May 2001, SP3 on 29 August 2002, and SP4 on 26 June 2003. Microsoft phased out all development of its Java Virtual Machine (JVM) from Windows 2000 in SP3. Internet Explorer 5.01 was also upgraded to the corresponding service pack level.

Many Windows 2000 users were hoping for a fifth service pack, but Microsoft cancelled this project early in its development, and instead released Update Rollup 1 for SP4, a collection of all the security-related hotfixes and some other significant issues. The Update Rollup, however, does not include all non-security related hotfixes and is not subjected to the same extensive regression testing as a full service pack. Microsoft states that this update will meet customers' needs better than a whole new service pack, and will still help Windows 2000 customers secure their PCs, reduce support costs, and support existing computer hardware.

Although Windows 2000 is the last NT-based version of Microsoft Windows which does not include Windows Product Activation, Microsoft has introduced Windows Genuine Advantage for certain downloads and non-critical updates from the Download Center for Windows 2000.


Security criticisms
A number of potential security issues have been noted in Windows 2000. A common complaint is that "by default, Windows 2000 installations contain numerous potential security problems. Many unneeded services are installed and enabled, and there is no active local security policy". In addition to insecure defaults, according to the SANS Institute, the most common flaws discovered are remotely exploitable buffer overflow vulnerabilities. Other criticized flaws include the use of vulnerable encryption techniques.

Computer worms first became publicized when Windows 2000 was the dominant server operating system. Code Red and Code Red II were famous (and much discussed) worms that exploited vulnerabilities of the Windows Indexing Service of Windows 2000's Internet Information Services (IIS). In August 2003, two major worms called Sobig and Blaster began to attack millions of Microsoft Windows computers, resulting in the largest downtime and clean-up cost to that date. The 2005 Zotob worm was blamed for security compromises on Windows 2000 machines at the U.S. Department of Homeland Security, the New York Times Company, ABC and CNN.

Tuesday, 28 October 2008

Water cooling

Water cooling is a method of heat removal from components. As opposed to air cooling, water is used as the heat transmitter. Water cooling is commonly used for cooling internal combustion engines in automobiles and large electrical generators. Other uses include cooling the lubricant oil of pumps; for cooling purposes in heat exchangers; cooling products from tanks or columns, and recently, cooling of various major components inside top-end personal computers. The main mechanism for water cooling is convective heat transfer.
Advantages
The advantages of using water cooling over air cooling include water's higher specific heat capacity, density, and thermal conductivity. This allows water to transmit heat over greater distances with much less volumetric flow and reduced temperature difference. For cooling CPU cores, this is its primary advantage: the tremendously increased ability to transport heat away from source to a secondary cooling surface allows for large, more optimally designed radiators rather than small, inefficient fins mounted on or near a heat source such as a CPU core. The "water jacket" around an engine is also very effective at deadening mechanical noises, which makes the engine quieter.
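The difference in heat-carrying capacity can be made concrete with the relation Q = ρ · (volumetric flow) · c_p · ΔT. The sketch below compares the volumetric flow of water and of air needed to move the same heat load at the same temperature rise; the property values are approximate room-temperature figures and the 100 W load is an assumption:

```python
# Rough comparison of the volumetric flow needed to carry the same heat
# load with a 10 K coolant temperature rise (Q = rho * V_dot * c_p * dT).
# Property values are approximate room-temperature figures.

def flow_needed(q_watts, rho, c_p, delta_t):
    """Volumetric flow (m^3/s) required to carry q_watts at a given rise."""
    return q_watts / (rho * c_p * delta_t)

q, dt = 100.0, 10.0                      # 100 W component, 10 K rise
water = flow_needed(q, 1000.0, 4186.0, dt)
air = flow_needed(q, 1.2, 1005.0, dt)

print(f"water: {water * 1e6:.1f} mL/s, air: {air * 1000:.2f} L/s")
print(f"air needs roughly {air / water:.0f}x the volumetric flow of water")
```

The three-orders-of-magnitude gap is why a thin water loop can replace a large volume of forced air between the heat source and the radiator.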
Open method
An open water cooling system makes use of evaporative cooling, lowering the temperature of the remaining (unevaporated) water. A component such as a bong cooler replaces the radiator of a closed water cooling system. The obvious downside of this method is the need to continually replace the water lost due to evaporation.
Automotive usage
The use of water cooling carries the risk of damage from freezing. Automotive and many other engine cooling applications require the use of a water and antifreeze mixture to lower the freezing point to a temperature unlikely to be experienced. Antifreeze also inhibits corrosion from dissimilar metals and can increase the boiling point, allowing a wider range of water cooling temperatures. Its distinctive odor also alerts operators to cooling system leaks and problems that would go unnoticed in a water-only cooling system. The heated water can also be used to warm the air conditioning system inside the car, if so desired.
Other less common chemical additives are products to reduce surface tension. These additives are meant to increase the efficiency of automotive cooling systems. Such products are used to enhance the cooling of underperforming or undersized cooling systems or in racing where the weight of a larger cooling system could be a disadvantage.
Computer usage

Interior of a water cooled computer, showing CPU/GPU waterblock, tubing and pump.

DIY Watercooling setup showing Laing Thermotech D4 12v pump, Swiftech STORM CPU Waterblock and the typical application of a T-Line.
In recent years, water cooling has increasingly been used to cool computer components, especially the CPU. A water cooling setup usually consists of a CPU water block, a water pump, and a heat exchanger (usually a radiator with a fan attached). Water cooling not only allows quieter operation and improved overclocking; its improved heat handling also lets hotter processors be supported. Less commonly, GPUs, northbridges, hard drives, memory, VRMs, and even power supplies are also water cooled.
Water coolers for computers (other than mainframes) were, until the end of the 1990s, homemade. They were put together using car radiators (or more commonly, a car's heater core), aquarium pumps, and home-made water blocks. Users paired these automotive items with laboratory-grade PVC or silicone tubing and various reservoirs (home-made from plastic bottles, or constructed from cylindrical or sheet acrylic, usually clear) and/or a T-line. More recently, a growing number of companies have been manufacturing pre-made, specialised components, allowing water cooling to be compact enough to fit inside a computer case. This, coupled with the growing amount of heat produced by CPUs, has greatly increased the popularity of water cooling, though it remains a niche market.
Dedicated overclockers will occasionally use vapor-compression refrigeration or thermoelectric coolers in place of more common standard heat exchangers. Water cooling systems in which water is cooled directly by the evaporator coil of a phase change system are able to chill the circulating coolant below the ambient air temperature (an impossible feat using a standard heat exchanger) and, as a result, generally provide superior cooling of the computer's heat-generating components. The downside of phase-change or thermoelectric cooling is that it uses much more electricity and antifreeze must be added due to the low temperature. Additionally, insulation, usually in the form of lagging around water pipes and neoprene pads around the components to be cooled, must be used in order to prevent damage caused by condensation of water vapour from the air on the surfaces at below ambient temperature. Common places from which to borrow the required phase change systems are a household dehumidifier or air conditioner.
An alternative cooling system, which enables components to be cooled below the ambient temperature, but which obviates the requirement for antifreeze and lagged pipes, is to place a thermoelectric device (commonly referred to as a 'Peltier junction' or 'pelt' after Jean Peltier, who documented the effect) between the heat-generating component and the water block. Because the only sub-ambient temperature zone now is at the interface with the heat-generating component itself, insulation is required only in that localized area. The disadvantage to such a system is that pelts typically draw a large amount of power, and the water cooling system is required to remove this power, in addition to the heat generated by the component. Another possible danger is condensation, resulting from the ambient air right around the pelt being cold. This condensation could cause a short-circuit, shutting the computer down or possibly permanent damage. A proper installation requires that the Peltier be "potted" with silicone epoxy. The epoxy is applied around the edges of the device, preventing air from entering or leaving the interior.
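The extra electrical load matters because a Peltier module obeys a simple energy balance: the heat rejected on its hot side is the heat pumped from the component plus the module's own electrical input. A small sketch with illustrative wattages:

```python
# Energy balance for a Peltier (thermoelectric) module: the heat rejected
# on the hot side equals the heat pumped from the component plus the
# module's own electrical input, so the water loop must remove both.
# The wattages below are illustrative, not from any specific hardware.

def hot_side_heat(q_component, p_peltier):
    """Total heat (W) the water block must remove with a Peltier in the stack."""
    return q_component + p_peltier

q_cpu = 70.0     # heat produced by the CPU, watts
p_pelt = 90.0    # electrical power drawn by the Peltier, watts
print(hot_side_heat(q_cpu, p_pelt))   # the loop sees 160 W, not 70 W
```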
Apple's Power Mac G5 was the first mainstream desktop computer to have water cooling as standard, and Dell later followed suit by shipping their XPS computers with liquid cooling, using thermoelectric cooling to help cool the liquid.

A Marley mechanical induced draft cooling tower.
Industrial usage
Most industrial cooling towers use river water or well water as their source of fresh cooling water. The large mechanical induced-draft or forced-draft cooling towers in industrial plants such as power stations, petroleum oil refineries, petrochemical plants and natural gas processing plants continuously circulate cooling water through heat exchangers and other equipment where the water absorbs heat. That heat is then rejected to the atmosphere by the partial evaporation of the water in cooling towers where upflowing air is contacted with the circulating downflow of water. The loss of evaporated water into the air exhausted to the atmosphere is replaced by "make-up" fresh river water or fresh cooling water. Since the evaporation of pure water is replaced by make-up water containing carbonates and other dissolved salts, a portion of the circulating water is also continuously discarded as "blowdown" water to prevent the excessive build-up of salts in the circulating water.[1]
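The blowdown rate follows from a simple mass balance: make-up water replaces evaporation plus blowdown, and the allowed cycles of concentration C (how much more concentrated the circulating water may become than the make-up water) fixes blowdown at evaporation / (C − 1). A sketch with illustrative figures, ignoring drift losses:

```python
# Simple cooling-tower water balance. "Cycles of concentration" C is the
# ratio of dissolved-salt concentration in the circulating water to that
# in the make-up water; at steady state blowdown = evaporation / (C - 1)
# and make-up = evaporation + blowdown. Drift losses are ignored and the
# flow figures are illustrative.

def blowdown(evaporation, cycles):
    return evaporation / (cycles - 1.0)

def makeup(evaporation, cycles):
    return evaporation + blowdown(evaporation, cycles)

evap = 10.0   # m^3/h lost to evaporation
for c in (2.0, 4.0, 6.0):
    print(f"cycles={c}: blowdown={blowdown(evap, c):.2f}, "
          f"make-up={makeup(evap, c):.2f} m^3/h")
```

Running at higher cycles of concentration saves make-up water but lets salts accumulate, which is the trade-off the blowdown stream manages.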

Cooling water intake of a nuclear power plant.
On very large rivers, but more often at coastal and estuarine sites, "direct cooled" systems are often used instead. These industrial plants do not use cooling towers and the atmosphere as a heat sink but put the waste heat to the river or coastal water instead. These "once-through" systems thus rely upon a good supply of river water or sea water for their cooling needs; the warmed water is returned directly to the aquatic environment. Thermal pollution of rivers, estuaries and coastal waters is an issue which needs to be addressed when considering the siting of such plants. Other impacts include "impingement" (the capture of larger organisms such as fish and shrimp on screens protecting the small bore tubes of the heat exchangers from blockage) and "entrainment" (the combined effects of temperature, pressure, biocide residual and turbulence/shear on smaller organisms entrained with the cooling water and then expelled back to the aquatic environment in the effluent). The cooling water in such heat exchange cycles is often treated with a biocide to prevent fouling in heat exchangers like condensers and other equipment, but in some instances such control can be exercised instead through frequent cleaning, antifouling paints (both toxic-release and non-toxic), or heat treatment.
High grade industrial water (produced by reverse osmosis) and potable water is sometimes used in industrial plants requiring high-purity cooling water.
Some nuclear reactors use heavy water as a coolant. Heavy water is employed in nuclear reactors because it absorbs far fewer neutrons than ordinary water, which allows the use of less enriched (or even natural uranium) fuel. For the main cooling system, normal water is preferably employed through the use of a heat exchanger, as heavy water is much more expensive. Reactors that use other materials for moderation (such as graphite) may also use normal water for cooling.
Water block

A water block is the water-cooling equivalent of a heatsink. It can be used on many different computer components, including the central processing unit (CPU), GPU, and northbridge chipset on the motherboard. It consists of at least two main parts. The "base" is the part that makes contact with the device being cooled; it is usually made of a high-conductivity metal such as aluminum or copper, or, in many newer blocks, silver. The "top" ensures the water is contained safely inside the block and has connections that allow hosing to connect it to the water cooling loop. The top can be made of the same metal as the base, transparent Perspex, Delrin, nylon, or HDPE. Most newer high-end water blocks also contain mid-plates, which serve to add jet tubes, nozzles, and other flow-altering devices.
The base, top, and mid-plate(s) are sealed together to form a "block" with some sort of path for water to flow through. The ends of the path have inlet/outlet connectors for the tubing that connects it to the rest of the watercooling system. Early designs included spiral, zig-zag, or heatsink-like fin patterns to give the largest possible surface area for heat to transfer from the device being cooled to the water. These designs were used because of the conjecture that maximum flow was required for high performance. Trial and error and the evolution of water block design have shown that trading flow for turbulence can often improve performance. The Storm series of water blocks is an example of this: its jet-tube mid-plate and cupped base make it more restrictive to the flow of water than early maze designs, but the increased turbulence results in a large increase in performance. Newer designs include "pin" style blocks, "jet cup" blocks, further refined maze designs, micro-fin designs, and variations on these. Increasingly restrictive designs have only been possible because of increases in the maximum head pressure of commercially viable water pumps.
A water block is better at dissipating heat than an air-cooled heatsink due to water's higher specific heat capacity and thermal conductivity. The water is usually pumped to a radiator, where a fan pushing air through it takes the heat from the device and expels it into the air. A radiator removes heat more efficiently than a standard CPU or GPU heatsink/air cooler because it has a much larger surface area.
Installation of a water block is also similar to that of a heatsink, with a thermal pad or thermal grease placed between it and the device being cooled to aid in heat conduction.

Underclocking

Underclocking, also known as downclocking, is the practice of modifying a synchronous circuit's speed settings to run at a lower clock speed than the manufacturer's specification. Underclocking is the opposite of overclocking.
Microprocessor underclocking
For microprocessors, the purpose is generally to decrease the need for heat dissipation devices or to decrease electrical power consumption. This can provide increased system stability in high-heat environments, or can allow a system to run with lower airflow (and therefore quieter) cooling fans, or without one at all. For example, a Pentium 4 processor clocked at 2.4 GHz can be underclocked to 1.8 GHz and then safely run with reduced fan speeds. However, this invariably comes at the expense of some system performance. Underclocking is also sometimes used to observe a process closely at a lower speed, where running it at full speed does not allow that; this lets a programmer or technician troubleshoot an application that is running abnormally quickly.[dubious – discuss] Underclocking can also be performed on graphics card processors (GPUs), usually with the aim of reducing heat output. For instance, it is possible to set a GPU to run at lower clock speeds when performing everyday tasks (e.g. web browsing), allowing the card to operate at a lower temperature and thus lower, quieter fan speeds. The GPU can then be overclocked for more graphically intense applications, such as games. Underclocking a GPU will reduce performance, but the decrease will probably not be noticeable except in graphically intensive applications.
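The power saving comes from the fact that dynamic CPU power scales roughly as C·V²·f, so lowering the clock, together with the lower voltage it permits, compounds. The sketch below uses illustrative values, not measurements of any particular processor:

```python
# Why underclocking (with the accompanying undervolting) saves power:
# dynamic CPU power scales roughly as P = C * V^2 * f. The effective
# capacitance, voltages, and frequencies below are illustrative only.

def dynamic_power(c_eff, voltage, freq_hz):
    """Approximate dynamic switching power in watts."""
    return c_eff * voltage**2 * freq_hz

C = 1e-9                                 # effective switched capacitance, farads
full = dynamic_power(C, 1.5, 2.4e9)      # 2.4 GHz at 1.5 V
under = dynamic_power(C, 1.3, 1.8e9)     # 1.8 GHz at 1.3 V

# A 25% clock reduction plus a modest voltage drop nearly halves power.
print(f"relative power at 1.8 GHz / 1.3 V: {under / full:.2f}")
```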
Memory underclocking
Newer and faster RAM may be underclocked to match older systems as an inexpensive way to replace rare or discontinued memory. This might also be necessary if stability problems are encountered at higher settings.
When used
Dynamic frequency scaling (automatic underclocking) is very common on laptop computers and is beginning to emerge on desktop computers as well. In laptops, the processor is usually underclocked automatically whenever the computer is operating on batteries. Most newer notebook and some desktop processors (see Cool'n'Quiet) will also underclock themselves automatically when under a light processing load. Intel has also used this method on their Core 2 Duo processors, through a feature called SpeedStep.
Some processors underclock automatically as a defensive measure, to prevent overheating which could cause permanent damage. When such a processor reaches a temperature level deemed too high for safe operation, the thermal control circuit activates, automatically decreasing the clock and CPU core voltage until the temperature has returned to a safe level. In a properly cooled environment, this mechanism should trigger rarely (if ever).
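The defensive throttling described above amounts to a simple hysteresis loop: clock down one step when the die crosses a trip temperature, and clock back up once it falls to a safe level. The thresholds and frequency table below are invented for illustration, not taken from any real thermal control circuit.

```python
THROTTLE_AT = 85.0  # trip temperature in degrees C (assumed)
RESUME_AT = 70.0    # temperature considered safe again (assumed)
FREQ_STEPS = [2400, 2100, 1800, 1500]  # MHz, fastest to slowest (assumed)

def next_step(index, temperature_c):
    """Pick the next frequency-table index for the measured temperature."""
    if temperature_c >= THROTTLE_AT and index < len(FREQ_STEPS) - 1:
        return index + 1  # too hot: drop one step
    if temperature_c <= RESUME_AT and index > 0:
        return index - 1  # cooled down: recover one step
    return index          # between thresholds: hold the current clock

# A heat spike followed by gradual cooling:
idx = 0
for temp in [60, 80, 88, 90, 86, 75, 68, 65]:
    idx = next_step(idx, temp)
print(FREQ_STEPS[idx])
```

The gap between the two thresholds prevents the clock from oscillating rapidly when the temperature hovers near a single trip point.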
There are several different underclocking competitions similar in format to overclocking competitions, except the goal is to have the lowest clocked computer, as opposed to the highest.
Advantages
Reduced heat generation (and hence dissipation).
Reduced electrical power consumption.
Potentially longer hardware lifespan.
Increased stability.
Reduced noise from mechanical cooling parts (e.g. slower fan speeds, or removing the fan entirely).
In practice
Linux
The Linux kernel, like other open-source kernels, includes a feature known as CPU frequency scaling. This feature, often known as cpufreq, gives the system administrator a variable level of control over the CPU's clock speed. The kernel includes five governors by default: conservative, ondemand, performance, powersave, and userspace. The conservative and ondemand governors adjust the clock speed depending on CPU load, but each with a different algorithm: ondemand jumps to the maximum frequency on CPU load and decreases the frequency step by step on idle, whereas conservative increases the frequency step by step on load and jumps to the lowest frequency on idle. The performance, powersave, and userspace governors set the clock speed statically: performance to the highest available, powersave to the lowest available, and userspace to a frequency determined and controlled by the user.
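The ondemand/conservative distinction can be modelled in a few lines. This is a toy re-implementation of the documented policies over a table of available frequencies, not the actual kernel code; the frequency table and the load threshold are assumptions.

```python
FREQS = [800, 1200, 1600, 2000, 2400]  # available frequencies, MHz (assumed)
BUSY = 80                              # load (%) treated as "busy" (assumed)

def ondemand(index, load):
    """Jump straight to the top frequency on load; step down when idle."""
    if load > BUSY:
        return len(FREQS) - 1
    return max(index - 1, 0)

def conservative(index, load):
    """Step up one notch on load; jump straight to the bottom when idle."""
    if load > BUSY:
        return min(index + 1, len(FREQS) - 1)
    return 0

# Two busy samples followed by three idle ones:
i_od = i_cons = 0
for load in [90, 90, 20, 20, 20]:
    i_od = ondemand(i_od, load)
    i_cons = conservative(i_cons, load)

# ondemand is still stepping down; conservative has snapped to the minimum.
print(FREQS[i_od], FREQS[i_cons])
```

Run against the same load trace, ondemand reacts aggressively to load and lazily to idle, while conservative does the opposite, which is why conservative is gentler on power at the cost of responsiveness.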
Windows
Underclocking can be done manually in the BIOS or with Windows applications, or dynamically using features such as Intel's SpeedStep or AMD's Cool'n'Quiet.
Asus Eee PC
Some versions of the Eee PC use a 900 MHz Intel Celeron M processor underclocked to 630 MHz.
Apple iPhone
Apple's iPhone and iPhone 3G underclock a more powerful processor, rather than fully clocking a less powerful one, to maximise battery life: a weaker processor running at full speed can consume more power than a more powerful processor run at a reduced speed. In this application, the ARM 1176 processor is underclocked from 620 MHz to 412 MHz.
Performance
The performance of an underclocked machine will often be better than might be expected. Under normal desktop use, the full power of the CPU is rarely needed. Even when the system is busy, a large amount of time is usually spent waiting for data from memory, disk, or other devices. Such devices communicate with the CPU through a bus which operates at a much lower speed. Generally speaking, the lower the speed of a CPU, the closer its speed will be to that of the bus, and the less time it spends waiting.
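This argument can be made quantitative with a simple model: time spent waiting on memory, disk, or the bus does not scale with CPU frequency, so only the compute fraction of the workload slows down. The 40% wait fraction below is an assumed figure for illustration.

```python
def relative_runtime(wait_fraction, old_ghz, new_ghz):
    """Runtime at the new clock relative to the old clock (1.0 = no change).

    Only the compute fraction of wall time is assumed to scale with
    CPU frequency; the wait fraction stays fixed.
    """
    compute_fraction = 1.0 - wait_fraction
    return wait_fraction + compute_fraction * (old_ghz / new_ghz)

# Underclocking 2.4 GHz -> 1.8 GHz is a 25% clock cut, but with 40% of
# wall time spent waiting the runtime grows by only 20%:
print(round(relative_runtime(0.4, 2.4, 1.8), 3))  # 1.2
```

The more I/O- or memory-bound the workload, the smaller the observed slowdown, matching the everyday experience described above.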

Thermal grease

Thermal grease (also called thermal compound, heat paste, heat transfer compound, thermal paste, or heat sink compound) is a fluid substance, originally with properties akin to grease, which increases the thermal conductivity of a thermal interface (by compensating for the irregular surfaces of the components). In electronics, it is often used to aid a component's thermal dissipation via a heat sink.
Basic types
Thermal greases use one or more different thermally conductive substances:
Ceramic-based thermal grease has generally good thermal conductivity and is usually composed of a ceramic powder suspended in a liquid or gelatinous silicone compound, which may be described as 'silicone paste' or 'silicone thermal compound'. The most commonly used ceramics and their thermal conductivities (in W/(m·K)) are:[1] beryllium oxide (218), aluminium nitride (170), aluminium oxide (39), zinc oxide (21), and silicon dioxide (1). Thermal grease is usually white in colour, since these ceramics are all white in powder form.
Metal-based thermal grease contains solid metal particles (usually silver). It has better thermal conductivity (and is more expensive) than ceramic-based grease. It is also more electrically conductive, which can cause problems if it contacts the electrical connections of an integrated circuit.
Carbon-based thermal grease is still experimental, using diamond powder[2][3] or short carbon fibers[4].
Liquid-metal-based thermal paste is made of liquid metal alloys of gallium; it is rare and expensive.
All but the last class of compound usually use silicone grease as a medium, itself a heat conductor, though some manufacturers instead use fractions of mineral oil.[citation needed]
Purpose
Thermal grease is primarily used in the electronics and computer industries to help a heatsink draw heat away from a semiconductor component such as an integrated circuit or transistor.
Thermally conductive paste improves the efficiency of a heatsink by filling air gaps that occur when the irregular surface of a heat generating component is pressed against the irregular surface of a heatsink, air being approximately 8000 times less efficient at conducting heat (see Thermal Conductivity) than, for example, aluminium, a common heatsink material.[2] Surface imperfections inherently arise from limitations in manufacturing technology and range in size from visible and tactile flaws such as machining marks or casting irregularities to sub-microscopic ones not visible to the naked eye.
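The benefit of displacing air can be put in numbers with the one-dimensional conduction formula R = t/(k·A): for the same gap thickness, resistance scales inversely with conductivity. The gap thickness, contact area, and conductivities below are assumed round figures, not measurements of any particular interface.

```python
def layer_resistance(thickness_m, conductivity_w_mk, area_m2):
    """Conduction resistance of a flat layer, R = t / (k * A), in K/W."""
    return thickness_m / (conductivity_w_mk * area_m2)

AREA = 1e-3   # 10 cm^2 contact patch, in m^2 (assumed)
GAP = 50e-6   # 50 um effective gap left by surface roughness (assumed)

r_air = layer_resistance(GAP, 0.026, AREA)   # air, ~0.026 W/(m*K)
r_paste = layer_resistance(GAP, 3.0, AREA)   # silver compound, ~3 W/(m*K)

# Filling the same gap with compound cuts its resistance by two orders
# of magnitude:
print(f"air-filled gap: {r_air:.2f} K/W, paste-filled gap: {r_paste:.4f} K/W")
```

Even a mediocre compound wins by a huge margin over trapped air, which is why conformability matters as much as the compound's absolute conductivity.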
As such, both the thermal conductivity and the "conformability" (i.e., the ability of the material to conform to irregular surfaces) are the important characteristics of thermal grease.
Both high-power transistors, like those in a conventional audio amplifier, and high-speed integrated circuits, such as the central processing unit (CPU) of a personal computer, generate sufficient heat to require the use of thermal grease in addition to the heatsink. High temperatures push semiconductors toward their point of failure, and an overheating CPU can produce logic errors as heat raises the electrical resistance of the nanometre-scale circuits in the CPU core.[3]
Properties
The metal oxide and nitride particles suspended in silicone thermal compounds have thermal conductivities of up to 220 W/(m·K).[1] (For comparison, among metals used as particle additions, copper is 401 W/(m·K), silver 429, and aluminium 237.) The typical overall thermal conductivities of the silicone compounds are 0.7 to 3 W/(m·K); silver thermal compounds may have a conductivity of 2 to 3 W/(m·K) or more.
In compounds containing suspended particles, the properties of the fluid may well be the most important: as the figures above show, the overall conductivity is much closer to that of the fluid component than to that of the ceramic or metal particles. Other properties of the fluid component that matter for a thermal grease include:
How well it fills the gaps and conforms to the component's uneven surfaces and the heat sink
How well it adheres to those surfaces
How well it maintains its consistency over the required temperature range
How well it resists drying out or flaking over time
How well it insulates electrically
Whether it degrades with oxidation or breaks down over time
The compound must also be smooth so that it is easy to apply in a very thin layer.
Applying and removing
For computer CPU applications, the grease is often applied to both surfaces with a small plastic spatula or similar device.
The process is generally the same regardless of brand: apply a thin "grain of rice" of compound down the centre of the CPU. Some people prefer to put a small cross in the centre instead, though most manufacturers of high-quality compounds recommend the "grain of rice" method.
Because thermal grease's thermal conductivity is poorer than that of the metals it couples, it is important to use no more than is necessary to exclude air gaps: excess grease that separates the metal surfaces further only degrades conductivity and increases the chance of overheating. Silver-based thermal grease can also be slightly electrically conductive; if excess grease were to flow onto the circuits, it could cause a short circuit.
The preferred way to remove typical silicone-oil-based thermal grease from a component or heat sink is with isopropyl alcohol (rubbing alcohol). If none is available, pure acetone also works. There are also purpose-made cleaners for removing grease and preparing the contact surfaces.


Loop heat pipe

A loop heat pipe (LHP) is a two-phase heat transfer device that uses capillary action to remove heat from a source and passively move it to a condenser or radiator. LHPs are similar to heat pipes but have the advantages of providing reliable operation over long distances and of being able to operate against gravity. Different designs, ranging from powerful, large LHPs to miniature ones (micro loop heat pipes), have been developed and successfully employed in a wide range of ground-based and space applications.
Construction
The most common coolants used in LHPs are anhydrous ammonia and propylene[1].
Mechanism
Limitations of heat pipes
Heat pipes are excellent heat transfer devices, but their sphere of application is mainly confined to transferring relatively small heat loads over relatively short distances, with the evaporator and condenser at the same horizontal level. This limitation is mainly related to the major pressure losses associated with liquid flow through the porous structure present along the entire length of the heat pipe, and to viscous interaction between the vapour and liquid phases, also called entrainment losses. For applications involving transfer of large heat loads over long distances, the thermal performance of heat pipes is badly affected by the growth of these losses. For the same reason, conventional heat pipes are very sensitive to orientation in the gravitational field: for unfavourable slopes in the evaporator-above-condenser configuration, the pressure losses due to mass forces in the gravity field add to the total pressure losses and further reduce the efficiency of the heat transfer process.
As a result of these limitations, different solutions involving structural modifications to the conventional heat pipe have been proposed. Some of these modified versions of heat pipe incorporated arterial tube with considerably low hydraulic resistance for the return of the liquid to the heat supply zone e.g. arterial heat pipes while others provided spatial separation of the vapor and liquid phases of a working fluid at the transportation section e.g. separated lines heat pipes.
Though these heat pipes increased the heat transport length and transferred significant heat flows, they remained very sensitive to orientation in the gravity field. To extend the functional possibilities of two-phase systems to applications involving otherwise inoperable slopes in the gravity field, the advantages of spatially separating the transport lines and of using a non-capillary artery were combined in the loop scheme. The loop scheme makes it possible to build heat pipes with higher heat transfer characteristics that maintain normal operation at any orientation in the mass force field, and it forms the basis of the physical concept of Two-Phase Loops (TPLs).
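The pressure balance behind these limitations can be sketched numerically: a wick pumps only as hard as its capillary pressure 2σ/r allows, and that pressure must cover the flow losses plus any hydrostatic head ρgh when the evaporator sits above the condenser. The fluid properties below are rough figures for liquid ammonia (a common LHP coolant), and the pore radii are assumptions chosen to contrast a coarse heat-pipe wick with the much finer wick of an LHP.

```python
SIGMA = 0.020  # surface tension of liquid ammonia, N/m (approximate)
RHO = 600.0    # density of liquid ammonia, kg/m^3 (approximate)
G = 9.81       # gravitational acceleration, m/s^2

def capillary_pressure(pore_radius_m):
    """Maximum capillary pumping pressure of a wick, 2 * sigma / r."""
    return 2 * SIGMA / pore_radius_m

# Coarse conventional-heat-pipe wick (~100 um pores, assumed) versus the
# finer evaporator wick typical of an LHP (~1 um pores, assumed):
for r in (100e-6, 1e-6):
    p_cap = capillary_pressure(r)
    # Maximum evaporator-above-condenser height before the wick fails,
    # ignoring viscous and entrainment losses:
    h_max = p_cap / (RHO * G)
    print(f"pore radius {r * 1e6:.0f} um: {p_cap:.0f} Pa capillary head, "
          f"max lift against gravity {h_max:.2f} m")
```

Under these assumptions the fine wick pumps a hundred times harder, which is the quantitative reason a loop device with a fine evaporator wick can work against gravity where a conventional heat pipe cannot.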
Origins
Loop heat pipes were patented in the USSR in 1979 by Valery M. Kiseev, Jury F. Maidanik, and Jury F. Gerasimov, all of the former Soviet Union. The patent for LHPs was filed in the USA in 1982 (US Patent 4,467,861).
Applications
The first space application was aboard a Russian spacecraft in 1989. LHPs are now commonly used aboard satellites, including the Russian Granat and Obzor spacecraft, Boeing's (Hughes) HS 702 communication satellites, the Chinese FY-1C meteorological satellite, and NASA's ICESat.[2]
LHPs were first flight-demonstrated by NASA aboard the Space Shuttle in 1997, on missions STS-83 and STS-94.
Loop heat pipes are important parts of systems for cooling electronic components.
A good amount of research on loop heat pipes has been done by Praveen Arragattu, a graduate student at the University of Cincinnati, in a thesis titled Optimal Solutions for Pressure Loss and Temperature Drop Through the Top Cap of the Evaporator of the Micro Loop Heat Pipe; a copy can be downloaded from OhioLINK ETD.
