Monday, November 10, 2008

Non Volatile Random Access Memory (NVRAM)

NVRAM stands for Non-Volatile Random Access Memory. It is a type of random access memory that does not lose its information when power is switched off. Nowadays the most common forms of random access memory, SRAM and DRAM, both require continual power in order to maintain their data. A typical NVRAM chip is a small 24-pin DIP (Dual Inline Package) integrated circuit, and is thus able to obtain the power needed to keep it running from the CMOS battery installed on your motherboard. NVRAM is therefore a type of non-volatile memory that offers random access. There are two types of NVRAM.

One type of NVRAM is based on EEPROM, that is, Electrically Erasable Programmable Read-Only Memory: circuit chips that maintain their information when power is switched off. In this case, the NVRAM is composed of a combination of SRAM and EEPROM incorporated into a single semiconductor die.

Another type of NVRAM is SRAM that is made non-volatile by connecting it to a constant power source such as a battery. Since SRAM requires a continual power supply in order to retain its contents, an NVRAM made from SRAM needs a backup power source, typically a small battery, to make sure it keeps its data.
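
As a concrete illustration, the battery-backed CMOS NVRAM of classic PCs is accessed through the index/data I/O port pair 0x70/0x71. Below is a minimal Linux sketch; it must run as root so ioperm() can grant port access, and the register map beyond the standard RTC bytes varies by BIOS.

    /* Minimal sketch: reading one byte of the battery-backed CMOS NVRAM
       on a classic PC through the index/data port pair 0x70/0x71.
       Linux/x86 specific; run as root so ioperm() succeeds. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/io.h>

    static unsigned char cmos_read(unsigned char index)
    {
        outb(index, 0x70);   /* select the NVRAM register */
        return inb(0x71);    /* read its contents */
    }

    int main(void)
    {
        if (ioperm(0x70, 2, 1) != 0) {  /* request access to ports 0x70-0x71 */
            perror("ioperm");
            return EXIT_FAILURE;
        }
        /* Register 0x0E is the traditional POST diagnostic status byte. */
        printf("CMOS register 0x0E = 0x%02X\n", cmos_read(0x0E));
        return EXIT_SUCCESS;
    }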

Advantages of NVRAM :-

1. NVRAMs serve applications that require high-speed read/write operations combined with non-volatile storage, such as parallel processing controllers for antilock braking systems and LANs.
2. NVRAM chips work like SRAM.
3. NVRAM chips do not require much power, and data retention can be guaranteed for up to ten years.
4. The performance of NVRAMs is superior in comparison to other non-volatile memory (NVM) products.

Disadvantages of NVRAM :-

1. If the CMOS chip does not make proper contact with the motherboard's contacts, the NVRAM cannot do its job.
2. If the battery backing the NVRAM chip fails, your system clock will stop running and important system configuration information may not be maintained.

Read more:

NAS Knowledge Base

Application of Photo Sensors in Smoke Detectors

Smoke sensor

There are two main types of smoke detectors: ionization detectors and photoelectric detectors. A smoke alarm uses one or both methods, sometimes plus a heat detector, to warn of a fire. The devices may be powered by a 9-volt battery, lithium battery, or 120-volt house wiring. Here let us discuss photoelectric detectors.

Photoelectric Detectors

In one type of photoelectric device, smoke can block a light beam. In this case, the reduction in light reaching a photocell sets off the alarm. In the most common type of photoelectric unit, however, light is scattered by smoke particles onto a photocell, initiating an alarm. In this type of detector there is a T-shaped chamber with a light-emitting diode (LED) that shoots a beam of light across the horizontal bar of the T. A photocell, positioned at the bottom of the vertical base of the T, generates a current when it is exposed to light. Under smoke-free conditions, the light beam crosses the top of the T in an uninterrupted straight line, not striking the photocell positioned at a right angle below the beam. When smoke is present, the light is scattered by smoke particles, and some of the light is directed down the vertical part of the T to strike the photocell. When sufficient light hits the cell, the current triggers the alarm.
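
The alarm decision itself reduces to a threshold test on the photocell output. The following sketch shows that logic in C; read_photocell_adc(), sound_alarm() and SMOKE_THRESHOLD are hypothetical names standing in for the real hardware hooks and a calibration value.

    /* Sketch of the decision loop in a scattered-light smoke detector.
       read_photocell_adc() and sound_alarm() are hypothetical hardware
       hooks; SMOKE_THRESHOLD would be fixed during calibration. */
    #include <stdint.h>

    #define SMOKE_THRESHOLD 512   /* ADC counts; calibration-dependent */

    extern uint16_t read_photocell_adc(void);  /* near 0 in clean air */
    extern void sound_alarm(void);

    void detector_poll(void)
    {
        /* In clean air the LED beam misses the photocell, so the reading
           stays near zero.  Smoke scatters light down onto the photocell,
           pushing the reading above the threshold and tripping the alarm. */
        if (read_photocell_adc() > SMOKE_THRESHOLD)
            sound_alarm();
    }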


Read more:

* NAS Knowledge Base
* Beganto

Hall Effect Sensor

A Hall effect sensor is a transducer that varies its output voltage in response to changes in a magnetic field.

Electricity carried through a conductor will produce a magnetic field that varies with current, and a Hall sensor can be used to measure the current without interrupting the circuit. Typically, the sensor is integrated with a wound core or permanent magnet that surrounds the conductor to be measured. One very important feature of the Hall effect is that it differentiates between positive charges moving in one direction and negative charges moving in the opposite.

How Hall Effect Sensor Works: -

Hall sensors usually have three pins: supply, ground and output.

The Hall effect refers to the potential difference (Hall voltage) that appears on opposite sides of a thin sheet of conducting or semiconducting material through which an electric current is flowing, created by a magnetic field applied perpendicular to the Hall element. There is an amplifier in the circuitry; it amplifies the signal from the Hall element, because the Hall element by itself produces a very low-level signal.
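
To make this concrete, here is a sketch of converting the amplified output of a ratiometric linear Hall sensor into a flux density. It assumes a hypothetical 5 V part whose output idles at half the supply and moves 13 mV per millitesla; the actual quiescent voltage and sensitivity come from the device datasheet.

    /* Sketch: converting a ratiometric linear Hall sensor output into a
       magnetic flux density.  Assumes a 5 V part idling at 2.5 V with a
       sensitivity of 13 mV/mT; both values are illustrative and must be
       taken from the real device's datasheet. */
    #include <stdio.h>

    #define VCC_MV                5000.0
    #define QUIESCENT_MV          (VCC_MV / 2.0)
    #define SENSITIVITY_MV_PER_MT 13.0

    double hall_mv_to_millitesla(double out_mv)
    {
        /* The sign of the result distinguishes the two field polarities,
           the Hall effect property highlighted above. */
        return (out_mv - QUIESCENT_MV) / SENSITIVITY_MV_PER_MT;
    }

    int main(void)
    {
        printf("2.76 V output -> %.1f mT\n", hall_mv_to_millitesla(2760.0));
        return 0;
    }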

Read more:

* NAS Knowledge Base
* Engineering Services

Monday, September 29, 2008

EEPROM (Electrically Erasable Programmable Read-Only Memory)

Introduction:-

EEPROM stands for Electrically Erasable Programmable Read-Only Memory. EEPROM is a type of non-volatile memory used in computers and electronic devices to retain data when power is turned off, whereas SRAM or DRAM lose their information when power is switched off. To store larger amounts of data, a special type of EEPROM known as flash memory is used, which is more economical than conventional EEPROM devices.

History :-

In 1978, George Perlegos at Intel developed the Intel 2816, which was built on earlier EPROM technology but used a thin gate oxide layer so that the chip could erase its own bits without an ultraviolet source. Perlegos and others later left Intel to form Seeq Technology, which used on-chip charge pumps to supply the high voltages necessary for programming EEPROMs.

Types of EEPROM :-

There are two types of EEPROM:

  1. Parallel Bus
  2. Serial Bus

Parallel Bus :-

Parallel EEPROM devices typically have an address bus wide enough to cover the complete memory and an 8-bit data bus. Most devices have chip select (CS) and write protect (WP) pins. Some microcontrollers have integrated parallel EEPROM.

Operation of a parallel EEPROM is simple and fast in comparison to serial EEPROM, but these devices are larger due to the higher pin count (28 pins or more) and have been declining in popularity in favor of serial EEPROM or flash.
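
As a sketch of that simple operation, the following C fragment writes one byte to a memory-mapped parallel EEPROM and then waits for the internal write cycle using the DQ7 data-polling scheme found on 28C256-class parts; the mapping and polling details are device- and board-specific.

    /* Sketch: writing one byte to a memory-mapped parallel EEPROM and
       waiting out the internal write cycle with "data polling" -- reading
       back until bit DQ7 matches the written value, as on 28C256-class
       parts.  Address mapping and timing are device-specific. */
    #include <stdint.h>

    static void eeprom_write_byte(volatile uint8_t *addr, uint8_t value)
    {
        *addr = value;                      /* latch address and data */
        while ((*addr & 0x80) != (value & 0x80))
            ;                               /* DQ7 reads inverted until the
                                               internal write completes */
    }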

Parallel EEPROMs are used in applications such as POS terminals, industrial controllers, LAN adapters, telecommunication switches, cellular phones and modems.


Serial Bus :-

A serial EEPROM transaction has three phases: an op-code phase, an address phase, and a data phase. The op-code is usually the first 8 bits input to the serial input pin of the EEPROM device, followed by 8 to 24 bits of addressing depending on the depth of the device, and then the data to be read or written.
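
The three phases are easiest to see in code. The sketch below frames a read transaction for a generic SPI-style 25xx-class serial EEPROM (READ op-code 0x03, 16-bit address); spi_select(), spi_transfer() and spi_deselect() are hypothetical bus primitives.

    /* Sketch of the three phases of a serial EEPROM transaction:
       op-code, then address, then data.  Models a generic SPI-style
       25xx-class part with a READ op-code of 0x03 and a 16-bit address. */
    #include <stdint.h>
    #include <stddef.h>

    #define EEPROM_OP_READ 0x03

    extern void    spi_select(void);
    extern void    spi_deselect(void);
    extern uint8_t spi_transfer(uint8_t out);   /* clocks one byte each way */

    void eeprom_read(uint16_t addr, uint8_t *buf, size_t len)
    {
        spi_select();
        spi_transfer(EEPROM_OP_READ);          /* phase 1: op-code      */
        spi_transfer((uint8_t)(addr >> 8));    /* phase 2: address, MSB */
        spi_transfer((uint8_t)(addr & 0xFF));  /*          address, LSB */
        for (size_t i = 0; i < len; i++)
            buf[i] = spi_transfer(0xFF);       /* phase 3: data         */
        spi_deselect();
    }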

Serial EEPROM products are used in many applications to store user reconfigurable data. Common applications are disk drives, modems, cellular phones, VCRs, CD players, hearing aids, PCMCIA cards, cordless phones, laser printers, computers and pagers.

Comparison with EPROM and EEPROM/Flash :-

EPROMs cannot be erased electrically and are programmed through hot-carrier injection onto the floating gate. Erasure is possible with an ultraviolet light source, although in practice many EPROMs are encapsulated in plastic that is opaque to ultraviolet light, making them effectively one-time programmable.

EEPROM can be programmed and erased electrically using field emission, generally known in the industry as "Fowler-Nordheim tunneling".

Most NOR flash memory is a hybrid: programming is through hot-carrier injection and erasure is through Fowler-Nordheim tunneling.


Applications of Switching Regulators

Virtually all of today's electronic systems require some form of power conversion. The trend toward lower power, portable equipment has driven the technology and the requirement for converting power efficiently. Switchmode power converters, often referred to simply as "switchers", offer a versatile way of achieving this goal.

Switching regulators are small, flexible, and allow either step-up (boost) or step-down (buck) operation.

When switcher functions are integrated and include a switch which is part of the basic power converter topology, these ICs are called "switching regulators". When no switches are included in the IC, but the signal for driving an external switch is provided, it is called a "switching regulator controller". Sometimes, usually for higher power levels, the control is not entirely integrated, but other functions to enhance the flexibility of the IC are included instead. It is important to know what you are getting in your controller, and whether your switching regulator is really a regulator or just a controller.

The primary limitations of switching regulators as compared to linear regulators are their output noise, EMI/RFI emissions, and the proper selection of external support components. Although switching regulators do not necessarily require transformers, they do use inductors.

One unique advantage of switching regulators lies in their ability to convert a given supply voltage with a known voltage range to virtually any given desired output voltage, with no “first order” limitations on efficiency. This is true regardless of whether the output voltage is higher or lower than the input voltage - the same or the opposite polarity.

Switchers also offer the advantage that, since they inherently require a magnetic element, it is often a simple matter to “tap” an extra winding onto that element and, often with just a diode and capacitor, generate a reasonably well regulated additional output. If more outputs are needed, more such taps can be used. Since the tap winding requires no electrical connection, it can be isolated from other circuitry, or made to “float” atop other voltages.

Though switchers can be designed to accommodate a range of input/output conditions, it is generally more costly in non-isolated systems to accommodate a requirement for both voltage step-up and step-down. So generally it is preferable to limit the input/output ranges such that one or the other case can exist, but not both, and then a simpler converter design can be chosen.

The concerns of minimizing power dissipation and noise as well as the design complexity and power converter versatility set forth the limitations and challenges for designing switchers, whether with regulators or controllers.

The ideal switching regulator performs a voltage conversion and input/output energy transfer without loss of power by the use of purely reactive components. Although an actual switching regulator does have internal losses, efficiencies can be quite high, generally greater than 80 to 90%. Conservation of energy applies, so the input power equals the output power. This means that in step-down (buck) designs the input current is lower than the output current, while in step-up (boost) designs the input current is greater than the output current. Input currents can therefore be quite high in boost applications, and this should be kept in mind, especially when generating high output voltages from batteries.
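
A quick calculation makes the point. The sketch below applies I_in = (V_out x I_out) / (V_in x efficiency) to a boost example, showing how large the battery current can become; the component values are illustrative.

    /* Sketch: conservation of energy in a switcher.  P_in = P_out / eff,
       so I_in = (V_out * I_out) / (V_in * eff) for buck and boost alike. */
    #include <stdio.h>

    double input_current(double v_in, double v_out, double i_out, double eff)
    {
        return (v_out * i_out) / (v_in * eff);
    }

    int main(void)
    {
        /* Boost example: 3.0 V battery to 12 V at 0.5 A, 85% efficient.
           The battery supplies far more current than the load draws. */
        printf("I_in = %.2f A\n", input_current(3.0, 12.0, 0.5, 0.85));
        return 0;
    }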

LEDs: An Introduction

The LED was discovered accidentally early in the last century (1907). In the early development stage, carborundum crystals were used as the light-emitting active material. Nick Holonyak Jr. of the General Electric Company developed the first practical visible-spectrum LED in 1962.

LED Inside: An LED is a semiconductor device that converts electrical energy directly into light. The most important part of an LED is the semiconductor chip located in the centre of the bulb. On its most basic level, the semiconductor comprises two regions. The p-region contains positive electrical charges, while the n-region contains negative electrical charges.

Construction: One way to construct an LED is to deposit three semiconductor layers on a substrate. Between the p-type and n-type semiconductor layers, an active region emits light when an electron and hole recombine. Considering the p-n combination to be a diode, when the diode is forward biased, holes from the p-type material and electrons from the n-type material are both driven into the active region, and the light is produced by a solid-state process called 'electroluminescence.' In this particular design, the layers of the LED emit light all the way around the layered structure, so the LED structure is placed in a tiny reflective cup that reflects the light from the active layer toward the desired exit direction.

LED emission and color determination: When sufficient voltage is applied to the chip across the leads of the LED, current starts to flow. Electrons in the 'n' region have sufficient energy to move across the junction into the 'p' region. When an electron moves sufficiently close to a positive charge in the 'p' region, the two charges recombine. For each recombination of a negative and a positive charge, a quantum of electromagnetic energy is emitted in the form of a photon. An LED emits incoherent narrow-spectrum light when electrically biased in the forward direction. This effect is a form of electroluminescence. The color of the emitted light depends on the chemical composition of the semiconducting material used and can be near-ultraviolet, visible or infrared. Usually a combination of chemical elements like gallium, arsenic and phosphorus is used.

LED Terminology:

AlInGaP: The preferred LED chip technology containing aluminium, indium, gallium and phosphorous to produce red, orange and amber colors

Bin: The systematic division of the distribution of performance parameters (flux, colour or CCT, and Vf) into smaller groups that meet the aesthetic requirements of the assembly

Binning: Subdivision of the manufactured distribution into bins sharing common operating parameters (colour, flux and forward voltage)

Candela (Cd): The luminous intensity as defined by the international metric standard (SI). The term, retained from the early days of lighting, defines a standard candle of a fixed size and composition as a basis for evaluating the intensity of other light sources

Chromaticity diagram: A horseshoe shaped line connecting the chromaticities of the spectrum of colors

Hue: The situation when the appearance of different colours is similar; e.g., matching blues and pinks

Lightness: A range of grayness between black and white

Chroma: The degree of departure from gray of the same lightness and increasing color; e.g., red, redder and pure red

Color gamut: The range of colors within the chromaticity diagram included when combining different sources

Color spectrum: All wavelengths perceived by the human sight, usually measured in nanometers (nm)

Color temperature: The effect of heating an object until it glows incandescently. The emitted radiation, and apparent color, changes in proportion to the temperature. This can be easily envisioned when considering hot metal in a forge that glows red, then orange and then white as the temperature increases.

Cool white: Light with a correlated color temperature between 5000K and 7500K, usually perceived as slightly blue

Correlated color temperature: The phrase used to describe the temperature at which a Planckian black body radiator and an illumination source appear to match, usually specified in Kelvin (K)

Color rendering index (CRI): The calculated rendered color of an object. The higher the CRI (based upon a 0-100 scale), the more natural the colors appear. Natural outdoor light has a CRI of 100. Common lighting sources have a large range of CRI.

Diffuser: An optical element used to mix light rays to improve uniformity

Driver: Electronics used to power illumination sources

Efficacy (luminous efficacy): The light output of a light source divided by the total electrical power input to that source, expressed in lumens per watt (lm/W)

Epoxy: Organic polymer frequently used for a dome or lens, often prone to optical decay over time, resulting in poor lumen maintenance. High-power light sources contain no epoxy and deliver superior lumen maintenance.

Flux: The sum of all the lumens (lm) emitted by a source

InGaN LED: The preferred LED semiconductor technology containing indium, gallium, and nitrogen to produce green, blue and white colored LED light sources

Kelvin temperature: Term and symbol (K) used to indicate the comparative color appearance of a light source when compared to a theoretical blackbody. Yellowish incandescent lamps are 3000K. Fluorescent light sources range from 3000K to 7500K and higher.

Lumen (lm): The international (SI) unit of luminous flux or quantity of light. It equals the amount of light spread over a 929 sq. cm surface by one candlepower when all parts of the surface are exactly 30 cm from the light source. For example, a dinner candle provides about 12 lumens. A 60W soft white incandescent lamp provides 840 lumens.

Lumen maintenance: The remaining flux percentage at the rated life of a light source

Lumen maintenance curve: A graph comparing the loss of light output against the time the light source is used

Luminaire: A lighting fixture complete with installed lamps and other accessories

Lux (lx): The SI unit of illuminance or luminous flux incident on a unit area—frequently defined as one lumen per square metre (lm/m2)

Metameric: The term used to describe the visual perception phenomenon where spectrally different sources blend into a third chroma. For example, Sir Isaac Newton discovered that people perceive white when observing mixed blue and yellow light.

Nits: Measurement of display screen brightness. 1 nit = 1 Cd/m2. The more the nits, the brighter the picture.

NTSC color space: The range of colors within the CIE chromaticity diagram included when combining phosphor-based RGB sources in CRTs such as televisions and computer monitors.

Planckian black body locus: The line on the CIE chromaticity diagram that describes the color temperature of an object when heated from approximately 1000K to more than 10,000K

Warm white: Light with a correlated color temperature between 3000K and 3500K, usually perceived as slightly yellow.

White point: The correlated color temperature (CCT) defined by a line perpendicular to the Planckian black body curve and intersecting the measured chromaticity.

Monday, September 22, 2008

Types of Switched Capacitor Voltage Converters



Voltage Inverter

In the basic switched capacitor voltage inverter, capacitor C1 is charged to the input voltage during the first half of the switching cycle. In the second half of the switching cycle its voltage is inverted and applied to capacitor C2 and the load, so the output voltage is the negative of the input voltage. The duty cycle, defined as the ratio of the charging time of capacitor C1 to the entire switching cycle time, is usually 50% because that generally yields the optimal charge transfer efficiency.

After the initial start-up transient has passed and steady-state conditions are reached, capacitor C1 only has to supply a small amount of charge to the output capacitor on each switching cycle. The amount of charge transferred depends upon the load current and the switching frequency. Capacitor C1 is also known as the charge pump capacitor.
During the time the charge pump capacitor is being charged by the input voltage, the output capacitor C2 must supply the load current. The load current flowing out of C2 causes a droop in the output voltage, which corresponds to a component of output voltage ripple. A higher switching frequency allows smaller capacitors for the same amount of droop, so the switching frequency directly affects the size of the external capacitors required. Switching frequencies are generally limited to a few hundred kHz.
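
The droop relationship is simply dV = I_load / (f_switch x C_out), since the output capacitor must deliver I_load / f_switch coulombs per cycle. A short sketch with illustrative values:

    /* Sketch: per-cycle output droop of a switched capacitor converter.
       Each cycle C2 must supply I_load / f_switch coulombs, so the droop
       is dV = I_load / (f_switch * C_out).  Values are illustrative. */
    #include <stdio.h>

    int main(void)
    {
        double i_load   = 0.010;   /* 10 mA load                  */
        double f_switch = 100e3;   /* 100 kHz switching frequency */
        double c_out    = 10e-6;   /* 10 uF output capacitor      */

        printf("droop per cycle = %.1f mV\n",
               1e3 * i_load / (f_switch * c_out));
        return 0;
    }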

Switched capacitor inverters are low cost and compact, and efficiencies greater than 90% are achievable. Typical switched capacitor inverters have a maximum output current of 150mA.

Voltage inverters are used in applications where a relatively low-current negative voltage is required in addition to the primary positive voltage. This may occur in a single-supply system where only a few high-performance parts require the negative voltage.

Voltage Doubler


The voltage doubler works similarly to the inverter; however, the pump capacitor is placed in series with the input voltage during its discharge cycle, thereby accomplishing the voltage doubling function. In a voltage doubler, the average input current is approximately twice the average output current.

Voltage doublers are used in low current applications where a voltage greater than the primary supply voltage is required.

Regulated Output Switched Capacitor Voltage Converters

Adding regulation to a switched capacitor voltage converter greatly increases its usefulness in many applications. The most straightforward approach is to follow the switched capacitor converter with a low dropout linear regulator (LDO). The LDO provides the regulated output and also reduces the ripple of the switched capacitor converter. This approach, however, adds complexity and reduces the available output voltage by the dropout voltage of the LDO.

Another approach to regulation is to vary the duty cycle of the switch control signal with the output of an error amplifier which compares the output voltage with a reference. However, this approach is highly nonlinear and requires long time constants in order to maintain good regulation.



Thursday, August 28, 2008

Paper Transistor

The transistor is one of the most important inventions of modern times: it revolutionized electronics and made much, much smaller circuits possible. Now Portuguese researchers have produced the first discrete paper-based transistors. To be more precise, they have made the first field effect transistors (FETs) with a paper interstrate layer. According to the research team, these new transistors offer the same level of performance as state-of-the-art oxide-based thin film transistors (TFTs) produced on glass or crystalline silicon substrates.
A common paper sheet is used on both sides in the fabrication of the paper transistor. In other words, paper is used instead of silicon; the device, invented by a Portuguese team, is manufactured at ambient temperature. This way, the paper acts simultaneously as the electric insulator and as the substrate. Furthermore, electrical characterization of the devices showed that the hybrid FETs' performance outpaces that of amorphous silicon TFTs and rivals the current state of the art in oxide thin film transistors.
There is an increased interest in the use of biopolymers for low-cost electronic applications. Since cellulose is the Earth’s major biopolymer, some international teams have reported using paper as the physical support (substrate) of electronic devices but no one had used paper as an interstrate component of a FET.
The cellulose is not only used as the substrate but also acts as the electric insulator, since the device is fabricated on both sides of the paper. Moreover, the paper transistor outperforms the amorphous silicon thin-film transistors used in modern LCD displays and is on par with the very latest oxide thin-film transistors, which are still a rare sight.
Since paper is a flexible biomaterial, it would open up new possibilities for bendable displays, bio-labeling, and small, cheap displays that could be used for labeling of various things, and more. The only concern so far is degradability, i.e. the lifespan of displays made from paper.
These results suggest promising new disposable electronic devices, like paper displays, smart labels, smart packaging, bio-applications and RFID tags.

Friday, August 22, 2008

Network Security: A Perspective

The technologies of computer security are based on logic. There is no universal standard notion of what secure behavior is; "security" is a concept that is unique to each situation. Security is extraneous to the function of a computer application, rather than ancillary to it, so security necessarily imposes restrictions on the application's behavior.

There are several approaches to security in computing; sometimes a combination of approaches is valid:

  1. Trust all the software to abide by a security policy but the software is not trustworthy (this is computer insecurity).
  2. Trust all the software to abide by a security policy and the software is validated as trustworthy (by tedious branch and path analysis for example).
  3. Trust no software but enforce a security policy with mechanisms that are not trustworthy (again this is computer insecurity).
  4. Trust no software but enforce a security policy with trustworthy mechanisms.

Many systems have unintentionally resulted in the first possibility. Since approach two is expensive and non-deterministic, its use is very limited. Approaches one and three lead to failure. Because approach number four is often based on hardware mechanisms and avoids abstractions and a multiplicity of degrees of freedom, it is more practical. Combinations of approaches two and four are often used in a layered architecture with thin layers of two and thick layers of four.

There are myriad strategies and techniques used to design security systems. There are few, if any, effective strategies to enhance security after design.

One technique enforces the principle of least privilege to a great extent: an entity has only the privileges that are needed for its function. That way, even if an attacker gains access to one part of the system, fine-grained security ensures that it is just as difficult for them to access the rest.
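
A classic concrete form of least privilege is a Unix daemon that uses root only to acquire a privileged resource and then permanently drops to an unprivileged account. A minimal C sketch follows; the "nobody" account name is illustrative.

    /* Sketch of least privilege: a daemon keeps root only long enough to
       claim a privileged resource, then switches to an unprivileged user
       so a later compromise yields only that user's rights.  The account
       name "nobody" is illustrative. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <pwd.h>

    static void drop_privileges(const char *user)
    {
        struct passwd *pw = getpwnam(user);
        if (pw == NULL) {
            fprintf(stderr, "unknown user %s\n", user);
            exit(EXIT_FAILURE);
        }
        /* Drop the group first: after setuid() succeeds, the process no
           longer has the right to change its group. */
        if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
            perror("dropping privileges");
            exit(EXIT_FAILURE);
        }
    }

    int main(void)
    {
        /* ... acquire privileged resources here, while still root ... */
        drop_privileges("nobody");
        /* ... do all remaining work with minimal rights ... */
        return 0;
    }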

Furthermore, by breaking the system up into smaller components, the complexity of individual components is reduced, opening up the possibility of using techniques such as automated theorem proving to prove the correctness of crucial software subsystems. This enables a closed-form solution to security that works well when only a single well-characterized property can be isolated as critical, and that property is also amenable to mathematical analysis. Not surprisingly, it is impractical for generalized correctness, which probably cannot even be defined, much less proven. Where formal correctness proofs are not possible, rigorous use of code review and unit testing represents a best-effort approach to making modules secure.

The design should use "defense in depth", where more than one subsystem needs to be violated to compromise the integrity of the system and the information it holds. Defense in depth works when the breaching of one security measure does not provide a platform to facilitate subverting another. Also, the cascading principle acknowledges that several low hurdles do not make a high hurdle. So cascading several weak mechanisms does not provide the safety of a single stronger mechanism.

Subsystems should default to secure settings, and wherever possible should be designed to "fail secure" rather than "fail insecure" (see fail safe for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure.

In addition, security should not be an all or nothing issue. The designers and operators of systems should assume that security breaches are inevitable. Full audit trails should be kept of system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks. Finally, full disclosure helps to ensure that when bugs are found the "window of vulnerability" is kept as short as possible.
