Tuesday, July 6, 2010

ensure you don't block the fan grills on the sides, back, or bottom of the laptop at any time.

2. Handle the screen carefully

Avoid touching or playing with your LCD screen. Yes, it might be fun to watch the waves generated by your finger against the screen, but LCD displays are fragile devices that must be cared for.
Take care when cleaning the screen too, and use only approved cleaning materials.

3. Don't drop it
Whatever you do, don't throw your laptop and don't drop it! Keep it safe inside a carrying case when moving around or traveling.
Don't leave it on the edge of a table or on an unstable support of some kind. One ill-fated drop to the floor could spell death for your mobile office.

Try to make sure the rubber feet underneath are in good condition and are still attached. This will prevent the device from sliding around accidentally.

What should you consider when buying a graphics card? What factors influence its performance? Below are some tips on how to buy a graphics card that offers good value for money. Let's get started.

Buying Guide to Graphics Cards.

The graphics card is a vital performance component of your computer, particularly if you play 3D games or work with graphics and video content. The graphics card sits in an expansion card slot in your PC and is specifically designed to process image data and output it to your monitor, enabling you to see it. A graphics card works by calculating how images appear, particularly 3D images, and renders them to the screen. 3D images and video take a lot of processing capacity, and many graphics processors are complex, require fans to cool them and need a direct power connection. The graphics card consists of a graphics processor, a memory chip for graphics operations, and a RAMDAC for display output. It may also include video capture, TV output, SLI and other functions. You can find the graphics card that suits you by comparing specifications between brands and vendors online.

What are your needs?

The first decision you need to make is whether you need a graphics card for handling 3D images or whether you only require 2D image rendering. For 2D requirements, you need only a low-cost solution. In many cases, an integrated graphics solution will suffice for 2D applications.

However with 3D graphics, the performance of the graphics card will impact directly on the frame rate and image quality of 3D programs and games. The differences between the low and high-end cards can be substantial, both in cost and performance.

Rendering 3D graphics is like lighting a stage: both the geometry of the shapes in question and the lighting need to be taken into account. The geometry of an image determines the parts of an object that can and can't be seen, the position of the eye and its perspective. The lighting is a calculation of the direction of the light sources, their intensities and the resulting shadows. The second part of presenting a 3D image is the rendering of colours and textures to the surfaces of the objects, modifying them according to light and other factors.

Most modern graphics cards include a microchip called the Graphics Processing Unit (GPU), which provides the processing power and memory to handle complex images. The GPU reduces the workload of the main CPU and provides faster processing. Different graphics cards have different capabilities in terms of processing power: they can render and refresh images 60 or more times per second, calculate shadows quickly, create image depth by rendering distant objects at lower resolution, modify surface textures fluidly and eliminate pixelation.

What Specifications to Consider

Processor clock speed

This impacts the rendering capability of the GPU. The clock speed itself is not the critical factor; rather, it is the per-clock performance of the graphics processor, which is indicated by the number of pixels it can process per clock cycle.

Memory size

This is the memory capacity that is used exclusively for graphics operations, often 512 MB or more. The more demanding your graphics applications are, the better you will be served by more memory on your graphics card.

16-32M
64M
128M
256M
512M
640M and more

Memory bandwidth

One thing that can slow down 3D graphics performance is the speed at which the computer delivers information to the graphics processor. A higher bandwidth means a faster data transfer, resulting in faster rendering speeds.
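
As a rough sketch of how these numbers relate (the clock rate, transfers per clock and bus width below are illustrative figures, not a specific card's specification), bandwidth can be estimated like this:

# Rough memory bandwidth estimate:
# bandwidth = effective memory clock (transfers/s) x bus width (bytes).
def memory_bandwidth_gbs(memory_clock_mhz, transfers_per_clock, bus_width_bits):
    effective_mt_per_s = memory_clock_mhz * transfers_per_clock   # mega-transfers/s
    bytes_per_transfer = bus_width_bits / 8
    return effective_mt_per_s * bytes_per_transfer / 1000          # MB/s -> GB/s

# Hypothetical card: 900 MHz double-data-rate memory on a 256-bit bus.
print(memory_bandwidth_gbs(900, 2, 256))   # about 57.6 GB/s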

Shader model

DirectX Shader Models give developers control over the appearance of an image as it is rendered on screen, introducing visual effects such as multi-layered shadows, reflection and fog.

Fill rate

This is the speed at which an image can be rendered or "painted". The rate is specified in texels per second, the number of 3D pixels that can be painted per second. A texel is a pixel with depth (3D). The fill rate comes from the combined performance of the clock speed of the processor and the number of pixels it can process per clock cycle, and tells you how quickly an image can be fully rendered on screen.
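
To see how clock speed and pixels per clock combine into a fill rate, here is a small worked example; the figures are invented for illustration, not taken from any real card.

# Fill rate = core clock (cycles/s) x pixels (or texels) processed per cycle.
core_clock_hz = 600e6      # hypothetical 600 MHz graphics processor
pixels_per_clock = 16      # hypothetical pixels processed per cycle

fill_rate = core_clock_hz * pixels_per_clock
print(f"{fill_rate / 1e9:.1f} gigatexels per second")              # 9.6

# Time to paint every pixel of one 1600x1200 frame at that rate:
pixels_per_frame = 1600 * 1200
print(f"{pixels_per_frame / fill_rate * 1000:.2f} ms per frame")   # ~0.20 ms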

Vertices/triangles

Graphics chips don't work on curves; rather, they process flat surfaces. A curve is created from multiple flat planes arranged to look like a curve. 3D objects are built from multiple triangular surfaces, sometimes hundreds or even thousands, tessellated to represent the curves and angles of the real world. 3D artists are concerned with the number of polygons required to form a shape. There are two different types of specification: vertices per second (i.e., the corners of the triangles) and triangles per second. To compare one measure with the other, you have to take into account the fact that adjacent triangles share vertices.
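
Because adjacent triangles share vertices, the two figures are not interchangeable. In a long triangle strip, for example, N triangles need only about N + 2 vertices, whereas N independent triangles need 3N. A quick sketch with illustrative numbers:

# Vertex counts for N triangles, with and without vertex sharing.
def strip_vertices(num_triangles):
    # A triangle strip: each new triangle reuses two vertices of the previous one.
    return num_triangles + 2

def independent_vertices(num_triangles):
    # Separate triangles: three unique vertices each.
    return num_triangles * 3

n = 1_000_000                      # hypothetical triangles per second
print(strip_vertices(n))           # ~1,000,002 vertices/s when shared
print(independent_vertices(n))     # 3,000,000 vertices/s when not shared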

Anti-aliasing

A technique used to smooth images by reducing the jagged stepping effect caused by diagonal lines and square pixels. Different levels of anti-aliasing have different effects on performance.

RAMDAC

The Random Access Memory Digital to Analogue Converter takes the image data and converts it to a format that your screen can use. A faster RAMDAC means that the graphics card can support higher output resolutions. Some cards have multiple RAMDACs allowing that card to support multiple displays.
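
The RAMDAC's speed is usually quoted as a pixel clock in MHz, and a quick back-of-the-envelope check shows why higher resolutions need a faster RAMDAC. The 30% blanking overhead and the example modes below are rough assumptions for illustration only.

# Approximate pixel clock a RAMDAC must sustain for a given display mode:
# pixels per frame x refresh rate, plus roughly 30% overhead for blanking.
def required_ramdac_mhz(width, height, refresh_hz, blanking_overhead=1.3):
    return width * height * refresh_hz * blanking_overhead / 1e6

print(required_ramdac_mhz(1280, 1024, 85))   # about 145 MHz
print(required_ramdac_mhz(1600, 1200, 85))   # about 212 MHz, needs a faster RAMDAC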

TV-out

Some graphics cards provide the option to connect a television via either a composite (RCA) or S-Video connector. Common TV-out configurations include:

S-Video out
S-Video in and S-Video out (VIVO)
YPbPr connection for HDTV

DVI

Some graphics cards include a connector for DVI monitors, handy because a lot of LCD screens support DVI. DVI offers better image quality than the standard VGA connector.

Dual-head

Dual-head is a term used when two monitors are used side by side, stretching your desktop across both.

SLI (Scalable Link Interface)

With SLI you can couple two graphics cards in your computer, enabling each card to take on half the rendering work and, in the best cases, nearly doubling performance.

When considering your graphics card, it pays to think about how much you need your computer to process your graphics output. Using a high end graphics card with a high pixels per clock rating, large memory, fast processor and other features means that you can run the latest games efficiently, or work in intensive graphics development.

Different Models

While there are many vendors of graphics cards, there are actually only two major manufacturers of chips for graphics cards. Nearly every graphics card on the market features a chip manufactured by either ATI or Nvidia. Cards using the same graphics chip will perform roughly the same as each other. However, even though they use the same chip, some feature slightly higher clock speeds, as well as manufacturer guaranteed overclocking-an even higher clock speed than that specified. Other factors that will influence your decision should include the amount of memory a card has (128MB, 256MB, 512MB) and its additional features, such as TV-Out and dual-screen support.

Use the search facilities at Myshopping.com.au to compare the features, prices and vendors of graphics cards.

How to Increase computer performance FREE.

Is your PC getting slower and slower?
How can you increase or boost your computer's performance without spending a lot of money? Are there any tweaks to tune up the computer without buying new hardware? Well, the answer is yes: it can be achieved without spending much money on hardware upgrades.

New technology keeps bringing better performance to our PCs, so we will always be tempted to upgrade the system for a better user experience. But did you know we can get more out of the old hardware? Your current system's performance might not be at its optimal level, and what we can do is optimize it with some minor tweaks.

If you are a Windows user, you might have noticed that after a period of time the operating system keeps slowing down and no longer performs as fast as it did when it was first installed.

The problem is most probably due to the lack of regular clean-up of the OS. It can be resolved by following these steps:

1. Delete Temp Files: Clean up the temporary files piled up on your hard disk. This can be done with free tools, or you can follow these steps to clean up temporary files in Windows XP/Vista: click Start, Programs (or All Programs), Accessories, System Tools, Disk Cleanup; choose the correct drive (usually C:\); then check the boxes in the list and delete the files.

Note that these steps clean up the system's temporary files. You should also delete the browser's temporary internet files and cookies to reduce the risk of browser hijacking; a minimal script version of the clean-up is sketched below. More detail coming soon (when I have more time :).
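
If you prefer to script the clean-up, here is a minimal sketch that lists (and can optionally delete) files older than a week in the user's temp folder. It is only an illustration of the idea; the built-in Disk Cleanup tool remains the safer route.

# Minimal sketch: list files in the user's temp folder older than 7 days.
# Set DELETE = True only if you are sure you want to remove them.
import os
import time
import tempfile

DELETE = False
cutoff = time.time() - 7 * 24 * 3600    # anything older than 7 days
temp_dir = tempfile.gettempdir()        # the user's temporary folder

for name in os.listdir(temp_dir):
    path = os.path.join(temp_dir, name)
    try:
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            print("old temp file:", path)
            if DELETE:
                os.remove(path)
    except OSError:
        pass    # file in use or permission denied; skip it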

2. Clean up the Registry: You should never try to clean up the registry manually unless you know what you are doing. Use a registry cleaner tool instead. Before cleaning the registry, don't forget to create a back-up, because these tools may or may not back it up automatically. That way you can roll the registry back if a problem appears after the clean-up.

3. Defragment Your Disk Drive: When programs spread hundreds or thousands of files across the disk, fragmentation can degrade your computer's performance, and the cure is to defragment the drive. Regular defragmenting enhances hard disk performance. You can use third-party software for this task, or the FREE tool built into Windows. To defragment your hard drive in Windows XP, open My Computer and right-click the C: drive, select Properties, click the Tools tab and choose Defragment Now (a scripted version is sketched below). It will take some time, so make sure you are not in a hurry.
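
Here is the scripted version mentioned above: a short sketch that simply launches the Windows command-line defragmenter on drive C:. It assumes the built-in defrag tool is available on the PATH and that the script is run with administrator rights.

# Sketch: start the built-in Windows defragmenter on drive C: from Python.
# Assumes defrag.exe is on the PATH and the script runs with admin rights.
import subprocess

result = subprocess.run(["defrag", "C:"], capture_output=True, text=True)
print(result.stdout)
print(result.stderr)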

4. Scan the System: Scanning the system for spyware and cleaning it up makes a huge difference to your computer's performance, as a significant amount of CPU resources will be freed up.

What Is DSL?

We often hear the term DSL when we try to configure our internet connection. So what is DSL? What is the difference between DSL and a cable modem? What are the advantages and disadvantages of DSL compared to the alternatives?

written by:elvin

DSL stands for Digital Subscriber Line. It is a service that makes use of existing copper telephone wires to deliver data services at extremely fast speeds. It does not interfere with the existing telephone line: you can surf the Internet and talk on the phone simultaneously.

DSL offers speeds that are around 5 to 25 times higher than a typical 56 Kbps dial-up connection, and it is an always-on type of connection. This means that websites load quickly, downloads are faster, video buffering is fast and smooth, and online gaming is far less constrained.
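
To put that speed difference in perspective, here is a rough comparison of how long a 5 MB download takes on dial-up versus a typical ADSL line; the line speeds are illustrative and protocol overhead is ignored.

# Rough download-time comparison (protocol overhead ignored).
def download_seconds(file_megabytes, line_kbps):
    bits = file_megabytes * 8 * 1_000_000   # megabytes -> bits (decimal)
    return bits / (line_kbps * 1000)

print(download_seconds(5, 56))     # 56 Kbps dial-up: ~714 s (about 12 minutes)
print(download_seconds(5, 1500))   # 1.5 Mbps ADSL:   ~27 s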

Based on the type of service, DSL can be categorized into three divisions: ADSL, IDSL and SDSL.

ADSL stands for Asymmetric Digital Subscriber Line. It offers download speeds of around 1.5 Mbps and upload speeds of around 384 Kbps. To get an ADSL connection, your location has to be within about 3 miles of your local telephone office. A DSL router is also needed for this type of connection.

IDSL is ISDN Digital Subscriber Line, a service which requires an ISDN router. It provides a connection speed of 144 Kbps. With this type of connection, distance is not a factor.

SDSL means Symmetric Digital Subscriber Line. The speeds available under this type of DSL connection depend on the distance between your location and your local telephone office. Download and upload speeds can both go up to 1.1 Mbps.

Advantages of DSL

No installation of new wires is required: DSL uses the existing telephone line to connect to the Internet, and it provides an extremely fast connection. Depending on the offer, you may not even have to pay DSL modem installation charges, since some companies provide the modem free with the appropriate plan. Download rates are much higher on DSL connections, many business organizations have benefited from DSL, and a DSL connection is very secure.

Disadvantages of DSL

The quality of your DSL connection depends on the distance between the DSL provider's office and your location: the nearer you are, the better the connection quality. Consumers located far from the local DSL office may therefore face some trouble. DSL provides high download speeds, but upload speeds are not as good.

DSL vs Cable Modems

The services provided through a cable modem can sometimes slow down or hang, depending on the number of users accessing that particular service. In a DSL connection there is no such problem: the speed of DSL is consistent and high, and there is no congestion of that kind on the network. DSL also provides more security than cable modem connections. The popularity of DSL has risen to new heights, and many users have dropped or replaced their cable modem connections.

Wireless Keyboards – more freedom, and a way to save on space.

Wireless keyboards have become increasingly common and popular. They give the user more freedom and help save on desk space.
Many of the early wireless keyboards did not have a particularly successful career: limited battery lifespan and the constant need to replace the batteries soon irked consumers. Advances have since been made to extend battery longevity. While keyboards in general are assumed to be the QWERTY model so often associated with desktop and laptop computers, wireless keyboards have also become increasingly common among handheld devices such as PDAs and other media players.

These wireless keyboards are used to play and manipulate media more conveniently, and include fast forward, rewind and play features as well as volume control and a mute function. Fully wireless miniature QWERTY keyboards are also available for PDAs and other handheld devices, allowing the user to enter data and contacts with greater ease.

Other useful features include a button that launches the Internet browser, and another dedicated to email. Rather than being a traditional keyboard minus the cords, wireless keyboards mark the next stage in the evolution of keyboards, designed to be more user friendly as well as of greater practical benefit to the user. Many wireless keyboards also come with a pointing device akin to the standard mouse, designed to eliminate the need for a traditional mouse, which further increases the freedom for the user.

Concerns have been raised, however, about the security of wireless keyboards. In order to work, wireless keyboards need to transmit signals from the device to the computer, and this means they are open to abuse and exploitation.

Radio frequency (RF) or infrared (IR) signals sent between the keyboard and the receiver attached to the computer allow it to work as a wireless keyboard, and these signals can be readily intercepted by hackers. This means that not only can the keyboard itself be compromised, but the integrity of your system can be compromised as well.


Wireless keyboards are also becoming increasingly eco-friendly, the latest innovative design feature being solar-powered keyboards. Batteries and cables are no longer required, which helps keep the keyboards running smoothly for a long time to come. With an eco-friendly design, wireless keyboards are fast becoming the latest must-have computer item.
By: gamerentalguide

For purchase information on wireless keyboards, laser keyboards, and other cool gifts, visit us at Giftgadgetgateway.com

How to Extend Battery Life in Your Laptop.

Here are some tips and tricks that can help to extend the battery life of your laptop.

1. Avoid running programs that take up a large amount of battery power, such as movie playback. Likewise, running more than two or three programs at the same time increases CPU usage and consumes more power as well.

Tip: change the power settings by right-clicking on the battery icon, or via the Control Panel. Adjusting these settings can save up to an hour of battery life.


2. Keep the machine at a safe temperature when running on battery. Using and charging the battery at room temperature can improve the battery's capacity and extend its life.


3. Charge your battery completely, charging it when there is little or no power left in it. Repeated partial charging can decrease the life of the battery substantially. Once the laptop has been plugged in, allow the battery to charge fully for the best results for both the computer and the battery.

A laptop is a personal computer designed for mobile use and small and light enough to sit on a person's lap while in use.[1] A laptop integrates most of the typical components of a desktop computer, including a display, a keyboard, a pointing device (a touchpad, also known as a trackpad, and/or a pointing stick), speakers, and often including a battery, into a single small and light unit. The rechargeable battery (if present) is charged from an AC adapter and typically stores enough energy to run the laptop for three to five hours in its initial state, depending on the configuration and power management of the computer.
Laptops are usually notebook-shaped with thicknesses between 0.7–1.5 inches (18–38 mm) and dimensions ranging from 10x8 inches (27x22cm, 13" display) to 15x11 inches (39x28cm, 17" display) and up. Modern laptops weigh 3 to 12 pounds (1.4 to 5.4 kg); older laptops were usually heavier. Most laptops are designed in the flip form factor to protect the screen and the keyboard when closed. Modern tablet laptops have a complex joint between the keyboard housing and the display, permitting the display panel to swivel and then lie flat on the keyboard housing.

Laptops were originally considered to be "a small niche market" and were thought suitable mostly for "specialized field applications" such as "the military, the Internal Revenue Service, accountants and sales representatives". But today, there are already more laptops than desktops in businesses, and laptops are becoming obligatory for student use and more popular for general use. In 2008 and 2009 more laptops than desktops were sold in the US.

Subnotebook.
A subnotebook, also called an ultraportable by some vendors, is a laptop designed and marketed with an emphasis on portability (small size, low weight and longer battery life) that retains the performance of a standard notebook.[15] Subnotebooks are usually smaller and lighter than standard laptops, weighing between 0.8 and 2 kg (2 to 5 pounds);[10] the battery life can exceed 10 hours[16] when a large battery or an additional battery pack is installed.

To achieve the size and weight reductions, ultraportables use high-resolution 13" and smaller screens (down to 6.4"), have relatively few ports (but in any case include two or more USB ports), employ expensive components designed for minimal size and best power efficiency, and utilize advanced materials and construction methods. Some subnotebooks achieve a further portability improvement by omitting an optical/removable media drive; in this case they may be paired with a docking station that contains the drive and optionally more ports or an additional battery.

The term "subnotebook" is usually reserved to laptops that run general-purpose desktop operating systems such as Windows, Linux or Mac OS X, rather than specialized software such as Windows CE, Palm OS or Internet Tablet OS.

Docking stations.

A docking station is a relatively bulky laptop accessory that contains multiple ports, expansion slots, and bays for fixed or removable drives. A laptop connects and disconnects easily to a docking station, typically through a single large proprietary connector. A port replicator is a simplified docking station that only provides connections from the laptop to input/output ports. Both docking stations and port replicators are intended to be used at a permanent working place (a desk) to offer instant connection to multiple input/output devices and to extend a laptop's capabilities.

Docking stations became a common laptop accessory in the early 1990s. The most common use was in a corporate computing environment where the company had standardized on a common network card and this same card was placed into the docking station. These stations were very large and quite expensive. As the need for additional storage and expansion slots became less critical because of the high integration inside the laptop, port replicators have gained popularity, being a cheaper, often passive device that often simply mates to the connectors on the back of the notebook, or connects via a standardised port such as USB or FireWire.

Standards

Some laptop components (optical drives, hard drives, memory and internal expansion cards) are relatively standardized, and it is possible to upgrade or replace them in many laptops as long as the new part is of the same type and is compatible with the motherboard.[28] Depending on the manufacturer and model, a laptop may range from having several standard, easily customizable and upgradeable parts to a proprietary design that cannot be reconfigured at all. The replaceability and upgradeability of the hardware may be advertised as a selling point by the laptop maker.

In general, components other than the four categories listed above are not intended to be replaceable, and thus rarely follow a standard. In particular, motherboards, locations of ports, and the design and placement of internal components are usually make- and model-specific. Those parts are neither interchangeable with parts from other manufacturers nor upgradeable; if broken or damaged, they must be substituted with an exact replacement part. Users unfamiliar with the relevant specifications are the most affected by incompatibilities, especially if they attempt to connect their laptops to incompatible hardware or power adapters.

Intel, Asus, Compal, Quanta and other laptop manufacturers have created the Common Building Block standard for laptop parts to address some of the inefficiencies caused by the lack of standards.

USB hard drives with encryption.


USB hard drives with encryption go by a number of different names, such as flash drives or thumb drives. These devices can fit onto your key ring and can contain a huge amount of storage. There are also memory storage cards that can also be used for this purpose.


As USB hard drives with encryption are easily lost or misplaced due to the compact size, many brands of these devices have the encryption placed directly into the hard drive itself to protect your data. Encryption software is also compatible with the majority of these USB hard drive devices. This security is an essential element to protect the data that you may be storing on one of these handy devices.


For the active business person or college student, a USB hard drive with encryption may be the perfect device for accessing your personal information anywhere, safely.


One advantage that these USB hard drives with encryption have over other storage devices such as floppy discs or CDs is the ability to connect to a USB port, which allows them to be used with almost any computer system, since most new computers today no longer have floppy disk drives. These USB hard drives with encryption also offer much more storage and are more compact and durable than those other memory storage devices.


Most of these USB hard drives with encryption use a standard type-A USB connection that can be plugged into any USB port on a computer system. The device contains a small circuit board that is encased in durable metal or plastic and the connectors are either protected by a plastic removable cover or retract into the casing itself. The computer system provides the power source for the USB hard drives with encryption.


The Usage


The most common use for these USB hard drives with encryption is to transport and store personal information. For example, medical records can be stored on a flash drive so that any hospital can access a person's medical history in an emergency if the patient is unable to provide the information himself.


USB hard drives with encryption can also be used as a backup to store computer files in case the host device experiences a virus or other type of mechanical failure. Antivirus software can also be transferred from a flash drive into an infected host as well as storing a large amount of the host's data while that system is shut down, making it an invaluable tool in making essential computer repairs.




Know more about RAM and computer memory.

A device that holds data for a computer for a short period of time is called computer memory. If you have spent even just a little time with any computer, you will hear about RAM, Hard Disks, or CD's and DVD's. All these are means to store the information from a computer. The capacity and speed of storage varies much in these kinds of memory.

The Central Processing Unit of a computer is connected to the main memory. This main memory is used to store the data and programs that are currently being run on the CPU. In modern computers the random access memory (RAM), a solid-state memory, is attached to the CPU using a memory bus, which includes an address bus and a data bus.

In addition to RAM, there may be a cache memory also, which contains small chunks of information that is to be used by the CPU soon. The idea is to reduce the fetching time, and thereby speed up the working of the CPU. Cache memory increases the throughput of the CPU, affecting the performance of the computer. In general, the RAM is the most important part of computer memory. RAM is made from integrated semiconductor chips.
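
The idea behind a cache is easy to see in software as well. The sketch below memoizes a deliberately slow lookup so that repeat requests are served from a small cache instead of being fetched again; it is only an analogy for what the hardware cache does for the CPU, not how a CPU cache is actually built.

# Software analogy for a cache: remember recent results so repeated
# requests are answered without redoing the slow fetch.
import time
from functools import lru_cache

@lru_cache(maxsize=128)          # a small, fast "cache" in front of slow storage
def fetch(address):
    time.sleep(0.05)             # pretend this is a slow trip to main memory
    return address * 2           # dummy "data" stored at that address

start = time.time()
for _ in range(100):
    fetch(42)                    # only the first call pays the 50 ms penalty
print(f"100 reads took {time.time() - start:.2f} s thanks to the cache")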

RAM and Other Types of Memory:

The primary characteristic that separates RAM from other types of memory is that any memory location in RAM can be accessed at almost the same speed, in any (random) order. This is vastly different from the sequential access used by devices such as tapes. Most RAM chips are volatile, meaning that as soon as the power supply to the chips is switched off, they lose all their data.

Some computers use a shadow RAM as well, which copies data present in ROM to make data availability faster, thereby helping increase the computer's speed and efficiency.

Cost:

In earlier times, the cost of computer hardware was enormous, so whenever an upgrade was required, it was much cheaper to buy the parts and do an upgrade rather than to buy a new computer. This has changed. Now the cost of hardware has gone down, and it is easier to replace the whole computer than to buy a few parts.

This fall in prices applies mainly to desktop computers used at home. Replacing servers and high-end computers is still very expensive, so when a new function or capacity expansion is required, it is easier and cheaper to replace parts such as computer memory.

RAM Configurations:

RAM chips are available in various configurations ranging from 128 MB to 1 GB for laptops and desktops. For normal usage, this much RAM is enough. If you wish to play many games, watch movies, or work on graphics, then you will need more RAM, as all these applications are memory hungry. In such cases, buying a whole new system is not a very logical solution. Most computers and laptops come with internal expansion slots that can be used to expand the memory.

An optimal amount of computer memory is essential, as unlike earlier times, our memory needs have grown. Normally, we already have an email client, a messenger, and music running when we sit down to work on our computers. All of these have taken their share of memory, leaving little behind for the application we need to launch for our work. These days 1 GB of RAM is the recommended norm.

Hardware Information.

In computing, an interrupt is an asynchronous signal indicating the need for attention or a synchronous event in software indicating the need for a change in execution.
A hardware interrupt causes the processor to save its state of execution and begin execution of an interrupt handler.
Software interrupts are usually implemented as instructions in the instruction set, which cause a context switch to an interrupt handler similar to a hardware interrupt.
Interrupts are a commonly used technique for computer multitasking, especially in real-time computing. Such a system is said to be interrupt-driven.

Types of Interrupts.
Level-triggered
A level-triggered interrupt is a class of interrupts where the presence of an unserviced interrupt is indicated by a high level (1), or low level (0), of the interrupt request line. A device wishing to signal an interrupt drives the line to its active level, and then holds it at that level until serviced. It ceases asserting the line when the CPU commands it to or otherwise handles the condition that caused it to signal the interrupt.
Typically, the processor samples the interrupt input at predefined times during each bus cycle such as state T2 for the Z80 microprocessor. If the interrupt isn't active when the processor samples it, the CPU doesn't see it. One possible use for this type of interrupt is to minimize spurious signals from a noisy interrupt line: a spurious pulse will often be so short that it is not noticed.
Multiple devices may share a level-triggered interrupt line if they are designed to. The interrupt line must have a pull-down or pull-up resistor so that when not actively driven it settles to its inactive state. Devices actively assert the line to indicate an outstanding interrupt, but let the line float (do not actively drive it) when not signalling an interrupt. The line is then in its asserted state when any (one or more than one) of the sharing devices is signalling an outstanding interrupt.
This class of interrupts is favored by some because of a convenient behavior when the line is shared. Upon detecting assertion of the interrupt line, the CPU must search through the devices sharing it until one requiring service is detected. After servicing this device, the CPU may recheck the interrupt line status to determine whether any other devices also need service. If the line is now de-asserted, the CPU avoids checking the remaining devices on the line. Since some devices interrupt more frequently than others, and other device interrupts are particularly expensive, a careful ordering of device checks is employed to increase efficiency.
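
In software terms, a shared level-triggered line is serviced roughly like the loop below: keep walking the devices on the line until the line is de-asserted. This is only a schematic Python sketch of the idea, not real driver code; the device names are invented for the example.

# Schematic sketch of servicing a shared level-triggered interrupt line.
class Device:
    def __init__(self, name):
        self.name = name
        self.pending = False           # True while the device asserts the line

    def service(self):
        print("serviced", self.name)
        self.pending = False           # device de-asserts once handled

def line_asserted(devices):
    # The shared line stays active as long as ANY device still asserts it.
    return any(d.pending for d in devices)

def level_triggered_handler(devices):
    # Keep checking the devices until the line goes quiet, so no
    # outstanding request is left behind.
    while line_asserted(devices):
        for d in devices:
            if d.pending:
                d.service()

devs = [Device("disk"), Device("nic")]
devs[0].pending = devs[1].pending = True
level_triggered_handler(devs)
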
There are also serious problems with sharing level-triggered interrupts. As long as any device on the line has an outstanding request for service the line remains asserted, so it is not possible to detect a change in the status of any other device. Deferring servicing a low-priority device is not an option, because this would prevent detection of service requests from higher-priority devices. If there is a device on the line that the CPU does not know how to service, then any interrupt from that device permanently blocks all interrupts from the other devices.
The original PCI standard mandated shareable level-triggered interrupts. The rationale for this was the efficiency gain discussed above. (Newer versions of PCI allow, and PCI Express requires the use of message-signalled interrupts.)

Hybrid
Some systems use a hybrid of level-triggered and edge-triggered signalling. The hardware not only looks for an edge, but it also verifies that the interrupt signal stays active for a certain period of time.
A common use of a hybrid interrupt is for the NMI (non-maskable interrupt) input. Because NMIs generally signal major – or even catastrophic – system events, a good implementation of this signal tries to ensure that the interrupt is valid by verifying that it remains active for a period of time. This 2-step approach helps to eliminate false interrupts from affecting the system.

Edge-triggered
An edge-triggered interrupt is a class of interrupts that are signalled by a level transition on the interrupt line, either a falling edge (1 to 0) or a rising edge (0 to 1). A device wishing to signal an interrupt drives a pulse onto the line and then releases the line to its quiescent state. If the pulse is too short to be detected by polled I/O then special hardware may be required to detect the edge.
Multiple devices may share an edge-triggered interrupt line if they are designed to. The interrupt line must have a pull-down or pull-up resistor so that when not actively driven it settles to one particular state. Devices signal an interrupt by briefly driving the line to its non-default state, and let the line float (do not actively drive it) when not signalling an interrupt. This type of connection is also referred to as open collector. The line then carries all the pulses generated by all the devices. (This is analogous to the pull cord on some buses and trolleys that any passenger can pull to signal the driver that they are requesting a stop.) However, interrupt pulses from different devices may merge if they occur close in time. To avoid losing interrupts the CPU must trigger on the trailing edge of the pulse (e.g. the rising edge if the line is pulled up and driven low). After detecting an interrupt the CPU must check all the devices for service requirements.
Edge-triggered interrupts do not suffer the problems that level-triggered interrupts have with sharing. Service of a low-priority device can be postponed arbitrarily, and interrupts will continue to be received from the high-priority devices that are being serviced. If there is a device that the CPU does not know how to service, it may cause a spurious interrupt, or even periodic spurious interrupts, but it does not interfere with the interrupt signalling of the other devices. However, it is fairly easy for an edge triggered interrupt to be missed - for example if interrupts have to be masked for a period - and unless there is some type of hardware latch that records the event it is impossible to recover. Such problems caused many "lockups" in early computer hardware because the processor did not know it was expected to do something. More modern hardware often has one or more interrupt status registers that latch the interrupt requests; well written edge-driven interrupt software often checks such registers to ensure events are not missed.
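
The usual defence against a missed edge is the latched interrupt status register mentioned above: the hardware records each edge in a register bit, and the handler reads and clears those bits. The Python below is purely a schematic illustration of that idea, with made-up device names.

# Schematic sketch: a latched status register for edge-triggered interrupts.
status_register = 0
DEVICE_BIT = {"timer": 0b01, "uart": 0b10}

def raise_edge(device):
    global status_register
    status_register |= DEVICE_BIT[device]            # hardware latches the event

def edge_triggered_handler():
    global status_register
    pending, status_register = status_register, 0    # read and clear the latch
    for name, bit in DEVICE_BIT.items():
        if pending & bit:
            print("handling", name)

raise_edge("timer")
raise_edge("uart")        # a second edge arrives before the handler runs
edge_triggered_handler()  # both events are still recovered from the latch
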
The elderly Industry Standard Architecture (ISA) bus uses edge-triggered interrupts, but does not mandate that devices be able to share them. The parallel port also uses edge-triggered interrupts. Many older devices assume that they have exclusive use of their interrupt line, making it electrically unsafe to share them. However, ISA motherboards include pull-up resistors on the IRQ lines, so well-behaved devices share ISA interrupts just fine.

Performance issues

Interrupts provide low overhead and good latency at low offered load, but degrade significantly at high interrupt rate unless care is taken to prevent several pathologies. These are various forms of livelocks, when the system spends all of its time processing interrupts, to the exclusion of other required tasks. Under extreme conditions, a large number of interrupts (like very high network traffic) may completely stall the system. To avoid such problems, an operating system must schedule network interrupt handling as carefully as it schedules process execution.

Difficulty with sharing interrupt lines

Multiple devices sharing an interrupt line (of any triggering style) all act as spurious interrupt sources with respect to each other. With many devices on one line the workload in servicing interrupts grows in proportion to the square of the number of devices. It is therefore preferred to spread devices evenly across the available interrupt lines. Shortage of interrupt lines is a problem in older system designs where the interrupt lines are distinct physical conductors. Message-signalled interrupts, where the interrupt line is virtual, are favoured in new system architectures (such as PCI Express) and relieve this problem to a considerable extent.
Some devices with a badly designed programming interface provide no way to determine whether they have requested service. They may lock up or otherwise misbehave if serviced when they do not want it. Such devices cannot tolerate spurious interrupts, and so also cannot tolerate sharing an interrupt line. ISA cards, due to often cheap design and construction, are notorious for this problem. Such devices are becoming much rarer, as hardware logic becomes cheaper and new system architectures mandate shareable interrupts.

Monday, July 5, 2010

Bundling Hardware and Software to Do Big Jobs.

In data-center computing, the big trend today is to move from building blocks to bundles.

John Marshall Mantel for The New York Times

Rodney Adkins, senior vice president for systems and technology at I.B.M., which is ahead in developing customized systems.

Suppliers are offering customers assembled bundles of hardware and software to make it easier and less expensive for customers to cope with the Internet-era surge in data — an information flood coming from internal databases, but also from Web-based collaboration and smartphone applications, sensors that monitor electrical use, environmental contamination and food shipments, even biological and genetic research. The shift to packaging hardware and software together is behind the recent big deals and partnerships in the technology industry: Oracle’s purchase of Sun Microsystems for $7.4 billion, an alliance between Hewlett-Packard and Microsoft announced last month, and a similar partnership between Cisco Systems and EMC.


But computer scientists at universities and technology companies say that simply putting the hardware and software building blocks together more efficiently for customers is not enough.


“The huge challenge is to take all this data and generate useful knowledge from it,” said Kunle Olukotun, a computer scientist at Stanford. “It’s an enormous opportunity in science and business, but it also presents a massive computing problem.”


The path to intelligently mining the explosion of data, Mr. Olukotun said, involves new approaches to breaking down computing tasks into subtasks that can be processed simultaneously — a concept known as parallel computing — and new system designs optimized for specific kinds of work.


Designing computer systems around the work to be done is a departure from the dominant approach of general-purpose design, in which machines are built to be capable of handling all kinds of chores and are then programmed to do specific tasks.

Several companies are beginning to bring workload-optimized systems design into the mainstream of corporate and government computing. The promise, analysts say, is to not only open the door to exploiting the data flood for competitive advantage, but also to reduce energy costs and help automate the management and administration of computer systems — a labor-intensive expense that is rising four times faster than the cost of hardware.

I.B.M., according to industry analysts, is at the forefront of the effort to develop more customized systems. And on Monday, the company is making the first of a series of announcements this year that embody the new approach.


I.B.M. is introducing a line of big computer servers using its Power 7 32-core microprocessors. They are priced at $190,000, typically run Unix or Linux, and are aimed at industries like finance and utilities, as well as scientific researchers. Next month, I.B.M. plans to unveil far less costly server systems based on industry-standard microprocessors made by Intel. Those machines, which typically run Unix or Microsoft Windows, will be used for Web collaboration, e-mail and other applications.

“These are not simply hardware products, but the result of years of work and investment at every level from the silicon up through the software,” said Rodney C. Adkins, I.B.M.’s senior vice president for systems and technology. “And the real challenge is to optimize it all, not just the hardware.”

The early deployment of so-called smart utility grids points to the challenges of handling ever-vaster amounts of data. Smart electric meters can measure room temperatures and energy use hourly or at 15-minute intervals instead of the old pattern of utility service workers reading electro-mechanical meters every month or two.

The goal of smart grids, which governments are starting to heavily subsidize, is to give households and businesses timely information so they can change their electricity consumption habits to reduce energy use and pollution, and save money.


That involves not only collecting the data, but also analyzing and presenting it to consumers in ways that are easily understood — typically a personalized Web site graphically showing household electricity consumption and pricing.

EMeter, a maker of smart-grid software in San Mateo, Calif., said that using I.B.M.’s workload-tuned P-7 systems should more than double its capacity to manage smart meters, bringing it up to 50 million. In one eMeter project, the utilities in Ontario are going to install 4.5 million smart meters by 2011. Before, meters were read once every month or two. Under the digital system, readings will be made hourly, a hundredfold increase in the data generated.

“If you can’t continually measure energy use down to the granular level in homes and businesses, the smart grid doesn’t work,” said Scott Smith, director of technical solutions at eMeter. “You need a tremendous amount of computing power to do that at scale.”

Computer scientists, biologists and researchers from Rice University and the Texas Medical Center have been working with I.B.M. scientists to fine-tune P-7 systems for cancer research. Genetic and protein-folding simulations require vast amounts of specialized high-speed processing and computer memory, said Kamran Khan, vice provost for information technology at Rice.

I.B.M., he said, dispatched three Ph.D. biologists to work with the cancer researchers. “They really understand computational biology,” Mr. Khan said.

The more bespoke approach to computer-systems design does increase the risk that customers are locked into one or two powerful companies. Indeed, the reason data-center customers long preferred the building-block approach to hardware and software was that it guaranteed competition among suppliers.

But the tradeoff has been that customers had to put the hardware and software together themselves, even as computing complexity and data-handling demands surged. Many companies, it seems, are willing to accept less competition among suppliers for the convenience and cost savings from prepackaged systems.

“It’s a balancing act,” said Frank Gens, chief analyst at IDC. “It does take away some choice, but also makes things a lot simpler. And that, after all, is the model that Apple used so successfully in consumer technology.”

Components Of A Personal Computer.

Processor (CPU)


The CPU, also called the processor or microprocessor, is the brain of the PC. The processor receives data input by the user, processes information and executes commands.


RAM


RAM (Random Access Memory) consists of memory modules installed on the motherboard containing microchips that hold data and programs while the CPU processes them. Data in RAM is only held temporarily and is lost when the PC is turned off.


Motherboards (Mobo)


Also called the system board, the motherboard is the main circuit board of the PC. The CPU, RAM, expansion cards, storage devices and optical drives are all plugged into the motherboard.


Video Cards


A video card, also called a display adapter, is an expansion card installed into either a PCI, AGP or PCI-E slot on the motherboard and gives the PC display capabilities.
  • PCI-E

  • AGP

  • PCI


Hard Drives (HDD)


The hard drive is the main secondary storage device in a PC. It uses one or more round, magnetically coated disks called platters that rotate at high RPM on a spindle, with read/write heads controlled by an actuator.


Power Supply


A power supply is the small, metal box usually located at the back of the PC that converts the AC current from your home to the DC current needed by the PC. A quality power supply is essential to a PC, especially for those who overclock or have numerous components and multiple hard drives installed.


Peripherals


A peripheral is a device that communicates with the CPU, but is not essential for the PC to operate. This includes a wide variety of PC components including monitors, keyboards, mice, printers, game controllers, scanners, media drives and speakers.


Monitors


A monitor is the display device used to output the video that is processed by a video card. The most common PC monitor technologies are CRT (Cathode Ray Tube) and LCD (Liquid Crystal Display). LCD panels use a liquid crystal material made of polarized molecules to produce an image. An electric current is passed through the liquid and causes the crystals to align so that light...


Laptop versus desktop PC: how to choose?

You might find yourself asking that question many times while shopping for a computer. This guide offers you the pros and cons of owning a laptop versus a desktop pc.


Processing speed
Comparing processing speeds, laptops usually lag behind their desktop counterparts. With the rapid advance in microchip technology, the gap between them will become smaller.
Wireless
Most laptops, especially those with Intel mobile chips, come with wireless capability out of the box. This means you can get online from any location at home easily, without ugly wires, if you have a wireless network set up at home. Desktop PCs do not typically provide this capability out of the box, although that may change in the near future.


Memory
Memory chips tend to be more expensive for laptops than for desktop PCs. If you buy a laptop with less than 512 MB of RAM, be prepared to pay more for memory upgrades than you would with a desktop PC.
Graphics Display
Because of the size of a laptop, most business or entry-level laptops use integrated graphics with limited RAM. This means most laptops, even some expensive ones, cannot run graphics-intensive applications or 3D games as well as a desktop PC. With a desktop PC, you can buy a dedicated graphics card just to serve a graphics-intensive application.
Portability
Portability is why everyone wants a laptop these days. Because of their size and weight, it is easier to carry a laptop around as opposed to a desktop pc.
Screen Display
People buy laptops for their portability, so laptops usually do not come with screens as big as their desktop counterparts. The screen technology used is usually not as good as that used in desktop PCs. Furthermore, with a desktop PC you can always upgrade to a bigger and better screen, whereas with a laptop you are stuck with the same screen for the whole lifespan of the machine.
Upgradeability
Laptops do not offer many upgrade options: you can usually only upgrade the memory and the hard disk. With a desktop you can upgrade almost anything, limited only by the motherboard. This means a cheap desktop PC offers a longer lifespan than a laptop.
So whether you should buy a cheap laptop or a cheap desktop comes down to your needs. If you want to be able to use a computer wherever you go, then a laptop will fulfill your needs. However, if you do not require the portability of a laptop, if you play a lot of 3D games or run graphics-intensive applications, or if you care about upgradeability to prolong the lifespan of your investment, then a desktop PC is the smarter choice for you.


Professional Data Recovery Software By Nucleus Data Recovery.

Data recovery software is the best means of recovering data in data-loss situations. Always be sure to select the best data recovery software for your valuable hard drive.

Data recovery is not a familiar word to individuals in normal day-to-day life. The importance of data recovery software becomes apparent when an individual experiences a data-loss situation. He then hopes to get his lost and inaccessible data recovered as soon as possible, but usually has very little knowledge about data recovery software.
Data recovery software is mainly developed to help individuals recover data lost due to virus attacks, hard disk crashes, improper system shutdowns, media errors, accidental deletion of files and folders, fire and water damage, power outages and many other undefined and unknown causes.

Is Data Recovery Software Useful?
Yes, data recovery software does prove useful.

There are any number of data recovery software companies around the world that provide powerful and effective data recovery software to recover and restore data. Good software is efficient enough to get your lost data back, making data recovery an easy and helpful process for returning the data to your computer.
How to select a Data Recovery Company?

As the data recovery industry is emerging very fast, false and scam artists are emerging alongside it, claiming to provide data recovery software or data recovery services but ultimately leaving you paying heavily for the software or service.
Just as you perform a complete check before purchasing any personal commodity, you should also enquire about the company from which you decide to buy data recovery software. Here I will help you out with some checks which should be performed before selecting a data recovery company, and my recommendation for a great one.

1. Clean Room:
To recover data from corrupt and damaged hard drives, the data recovery company should be equipped with a Class 100 clean room in which to perform the recovery.
A clean room environment with biometric security is required for work on corrupt hard drives, as they are extremely sensitive devices. The Class 100 rating refers to the number of microscopic particles per cubic foot of air, and this provides a safe and secure environment for corrupt hard drive recovery.

2. Clientele:
Analyze the clients of the data recovery company you are considering. The best way to judge a company is by the people who use it. Check the client testimonials – the more, the better.

3. Data Recovery Methods:
Check which techniques the data recovery software uses during the recovery procedure. Does the hard drive recovery software use non-destructive methods, or do the technicians simply start the recovery process on the damaged hard drive itself? If a technician starts this process on your corrupt hard drive, you can immediately tell that the company is not genuine, as doing so can permanently damage the platters and kill your data.
"Data recovery should always be performed on a good device, after which the recovered data can be transferred to external media."

4. Percentage of Data Recovery:
Most companies claim an 80% to 90% chance of data recovery. Always ask for the detailed figures behind that percentage. If the company is true to its figures, it will explain the entire percentage; if not, you can discover that too – they might not include the cases of drives from which they were unable to recover data, and may count partial recoveries as complete recoveries.
Always remember to ask questions and make enquiries about the company and its hard drive recovery software, as these questions can help you get your data back safely and spare your pocket from heavy expenditure.
Nucleus Data Recovery is one such organization, providing professional data recovery software. The software range offered is fast, technically advanced and safe for your data recovery needs.

What to consider when purchasing a CPU.



What is a CPU? What should you consider when buying one?


The CPU (central processing unit) is the brains of the computer. Sometimes referred to simply as the central processor, but more commonly just the processor, the CPU is where most calculations take place. In terms of computing power, the CPU is the most important element of a computer system.

Multi-Core Processors.
Multi-core refers to a CPU that includes more than one complete execution core per physical processor. It combines two or more processors, along with their caches and cache controllers, onto a single integrated circuit (silicon chip). It is essentially two or more separate processors in one package.

The trend is expanding very fast. Currently, dual-core processors are the most popular choice.
Dual-core processors are well suited to multitasking environments because there are two complete execution cores instead of one.

Judging from the current trend, motherboards, memory and other computer components are rapidly being upgraded to support multi-core processor platforms. Multi-core will gradually enter mainstream applications, and the decline in the prices of quad-core processors will also pave the way for their popularity.


FSB
FSB is short for Front Side Bus. It is the connection between the CPU and system memory. The CPU core clock is derived from the FSB clock through a multiplier, so the FSB runs at a fraction of the CPU's clock speed. The faster the FSB, the better the performance of the CPU.
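
As a concrete illustration of that relationship (the FSB clock and multiplier below are made-up example numbers, not a specific chip's figures):

# CPU core clock = FSB clock x internal multiplier.
fsb_mhz = 200           # hypothetical front side bus clock
multiplier = 11         # hypothetical CPU multiplier
core_clock_ghz = fsb_mhz * multiplier / 1000
print(core_clock_ghz)   # 2.2 GHz core clock

# Many front side buses transfer data several times per clock ("quad-pumped"):
effective_fsb_mt_s = fsb_mhz * 4
print(effective_fsb_mt_s)  # 800 MT/s effective FSB rate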


The Cache
The purpose of a cache is to enable the CPU to access recently used information very quickly.
A cache significantly affects CPU performance. A cache has a certain size, measured in kilobytes or megabytes: a typical L1 cache is 256 KB and a typical L2 cache is 1 MB. The larger the cache, the better the system performance boost, but it also depends on the CPU. Some caches operate at the full speed of the CPU, while others operate at half that speed or less.

The battle is between AMD and Intel.


As the CPU database currently stands, there is a wide array of processors that you can compare although the list of applications you can compare may leave something to be desired.

6 Things You Must Know about Fiber Optic Cable .

Outdoor fiber cables must endure harsh environmental factors such as UV radiation from sunlight, storms, snow and 80 mph winds, so outdoor cables must be strong, weatherproof and UV resistant. The outdoor cable should also be able to endure wide temperature variations both during installation and throughout its life span.


These factors determine the materials used for the cable construction. Various materials are used to suit the installation environment.

Outdoor cable jacket is treated to prevent UV light from penetrating inside the cable and damaging the internal glass fibers. Extra UV protection specification can be specified if needed.

:: Environment requirements on indoor fiber optic cables

Indoor fiber cables should be strong and flexible for easy pulling and installation. They should also carry the NEC-required fire and smoke ratings. As an industry-standard practice, single mode fiber jackets are yellow and multimode fiber jackets are orange.

:: Popular cable materials in fiber optic cable construction

1. PVC (Polyvinyl Chloride)

Features:

1) Good resistance to environmental effects. Some formulations are rated for -55°C to +55°C.
2) Good flame retardant properties. Can be used for both outdoor and indoor fiber optic cables.
3) PVC is less flexible than PE (Polyethylene)

2. PE (Polyethylene)

Features:

1) Popular cable jacket material for outdoor fiber cables
2) Very good moisture and weather resistance properties
3) Very good insulator
4) Can be very stiff in colder temperatures
5) If treated with proper chemicals, PE can be flame retardant

3. Fluoropolymers

Features:

1) Good flame-resistance properties
2) Low smoke properties
3) Good flexibility
4) Most often used for indoor fiber cables

4. Kevlar (Aramid Yarn)

Aramid yarn is the yellow, fiber-like material found inside the cable jacket, surrounding the fibers. It can also be used as a central strength member.

Features:

1) Aramid yarn is very strong and is used in bundles to protect the fibers
2) Kevlar is a brand of aramid yarn. Kevlar is often used as the central strength member on fiber cables which must withstand high pulling tension during installation
3) When Kevlar is placed surrounding the entire cable interior, it provides additional protection for the fibers from the environment

5. Steel Armor

A steel armor jacket is often used on direct-burial outdoor cables; it provides excellent crush resistance and is truly rodent-proof. Since steel is a conductor, steel-armored cables have to be properly grounded, and they lose the fiber optic cable's dielectric advantage.

Applications:

1) Outdoor direct burial cables
2) Fiber cables used in industrial environments where cables are installed without conduit or cable tray protection

Features:

1) Provides excellent crush resistance for outdoor direct burial cables
2) Protects cables from rodent bites
3) Decreases water ingress into the fiber, which prolongs the cable's life expectancy

6. Central Strength Member

For large fiber count cables, a central strength member is often used. The central strength member provides strength and support to the cable. During fiber optic cable installation, pulling eyes should always be attached to the central strength member and never to the fibers. On fiber splice enclosure and patch panel installations, the cable central strength member should be attached to the strength member anchor on the enclosure or patch panel.

How to Buy a Flash-Based MP3 Player.


The flash drive (USB flash drive) is something we use for storing data, with capacities up to a few GB; it's a portable backup device used for storing and transporting data and applications. What about a flash device that also has an MP3 function? It stores data and, at the same time, works as an MP3 player: the flash-based MP3 player.

Intro by: e.elvin

They are the choice for listening to hours of personal music. With no delicate moving parts, flash MP3 players can withstand the most rigorous workouts.

How to Buy a Flash-Based MP3 Player- by Michael Kobrin

The flash-based MP3 player market has gotten fairly complex, thanks to an ever-changing cast of characters and an explosion of new features. But there is a path through the maze, and it begins with understanding what you need, and what you're willing to spend.

Let's start with what you need: something that works with whatever computer you have. If you have a Windows-based PC, all players are compatible, including Apple iPods. If you have a Mac, you've got two options: an iPod, or a device that is USB Mass Storage Class–compliant (USB MC), meaning it can be plugged into a computer without software installation and be managed via drag-and-drop. Many popular players work only with Windows Media Player (WMP) on machines running Windows, and some USB MC players can work with WMP as well.

You should also consider how tech-savvy you (or the person you're buying for) are. There are no-fuss-no-muss players that you simply take out of the box, plug in to your computer, and drag music onto; then there are players that require a little bit of fiddling around with software like iTunes or Windows Media Player—and in some cases, with additional proprietary software. USB MC players are best for nontechies who don't have huge music collections. Of course, iPods are also very easy to use, thanks to their excellent integration with iTunes. Devices that work only with Windows Media Player tend to be somewhat more challenging to manage because the hardware/software integration isn't as polished.

If you plan on using online download or subscription services like Napster and Yahoo! Music Unlimited, you need a player that is compatible with Microsoft's PlaysForSure (aka Windows Media DRM 10). Note, too, that tracks purchased from Apple's iTunes Music Store will work only on iPods.

Right Hear, Right Now

It's best to figure out in advance how much listening you plan to do per day and how long you're likely to have to go in between charges. Some MP3 players use alkaline batteries, which may be the best solution for those who are away from computers or outlets for extended periods of time. Also, many players don't come with an AC adapter, so you have to charge them via a PC's USB port. The average player gets between 12 and 16 hours of battery life, though a few can go for several days before running out of juice.
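If it helps, the arithmetic is simple; the battery life and daily listening figures below are assumed examples, not the specs of any real player:

def days_between_charges(battery_hours, listening_hours_per_day):
    # How many days a full charge lasts at a given daily listening habit.
    return battery_hours / listening_hours_per_day

if __name__ == "__main__":
    # e.g. a 14-hour battery used 2 hours a day -> about 7 days between charges
    print(days_between_charges(14, 2))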

Next, figure out what you want in terms of physical size, capacity, and features. (If you don't care about how big or what shape the player is, skip directly to capacity.) Style mavens will want something thin and sleek like the iPod nano or Samsung Yepp YP-Z5 that can slip into any pocket without ruining the lines of your clothes.

Flash players' capacity currently tops out at 4GB with the lone exception of SanDisk's 6GB player. Unless you use your MP3 player only at, say, the gym, you'll probably want at least 1GB of storage space. The general belief is that you can fit 250 songs per gigabyte, but this depends entirely on the quality of your music files. In real-world usage, you can expect from 200 to 225 songs per gigabyte.
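The per-gigabyte estimate follows directly from the bitrate and length of your files. Here is a rough sketch; the 128 kbps and 160 kbps bitrates and the 4-minute song length are assumed examples, not fixed facts:

def songs_per_gb(bitrate_kbps, avg_song_minutes):
    # Approximate size of one song in megabytes, then songs per gigabyte.
    song_size_mb = bitrate_kbps / 8 * avg_song_minutes * 60 / 1000
    return 1000 / song_size_mb

if __name__ == "__main__":
    print(round(songs_per_gb(128, 4)))   # ~260 songs per GB at 128 kbps
    print(round(songs_per_gb(160, 4)))   # ~208 songs per GB at 160 kbps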

The iPod nano and iPod shuffle aren't the most feature-rich players out there, but they do what they do very well in terms of sound quality and ease of use. That said, the sound quality of most MP3 players is more than adequate, though I do run across some that just don't cut it. Ease of use is another matter: Apple's click wheel is still king, though manufacturers like iriver are looking to change that with some interesting new designs. I dislike the little joysticks and touchpads on many players because they're just not very precise; I prefer buttons, with very few exceptions.

Aside from music playback, the three most important extra features for many users are photo/video playback, recording, and FM radio. Handy features like photo playback, FM radio, and voice recording are quickly becoming standard on MP3 players, as are less-useful features such as line-in recording. If you like to watch movie trailers or other short video clips, there are a host of video-capable flash players, though none of them have impressed me with the quality and smoothness of video (not to mention that the screens tend towards the eye-squintingly tiny). Language learners and aspiring musicians may want to look for other niche features like A-B repeat, which lets you loop a section of a song or audiobook.

Right on Tracks

All digital music players can play MP3 files, which is sufficient for most users. Be aware, however, that iPods won't play WMAs, and very few non-Apple players play AAC files (and none but iPods play protected AAC tracks). If you're concerned with getting the best sound quality possible, and you're planning to upgrade the player's headphones for something better, you may want your player to support a lossless compression format like Apple Lossless (iPods only), FLAC (supported by a few non-Apple players), or WMA Lossless (supported only by a small handful of Windows Mobile-based players).