Electrical Seminar Abstract And Report 7
#1

Lightning Protection Using LFA-M

Introduction
A simple, effective and inexpensive method for lightning protection of medium-voltage overhead distribution lines uses long flashover arresters (LFAs). A new long flashover arrester model, designated LFA-M, has been developed. It offers a great number of technical and economic advantages.

The important feature of this modular long flashover arrester (LFA-M) is that it can be applied for lightning protection of overhead distribution line against both induced overvoltages and direct lightning strokes. The induced over voltages can be counteracted by installing a single arrester on an overhead line support (pole). For the protection of lines against direct lightning strokes, the arresters are connected between the poles and all of the phase conductors in parallel with the insulators.

Lightning is an electrical discharge between a cloud and the earth, between clouds, or between the charge centers of the same cloud. Lightning is a huge spark that takes place when a cloud is charged to such a high potential with respect to an earthed object (e.g. an overhead line) or a neighbouring cloud that the dielectric strength of the intervening medium (air) breaks down.

TYPES OF LIGHTNING STROKES

There are two main ways in which lightning may strike a power system:
1. Direct stroke
2. Indirect stroke

Direct Stroke
In a direct stroke, the lightning discharge is directly from the cloud to an overhead line. From the line, the current path may be over the insulators and down the pole to the ground. The overvoltage set up by the stroke may be large enough to flash over this path directly to the ground. A direct stroke can be of two types:

1. Stroke A
2. Stroke B

In stroke A, the lightning discharge is from the cloud to the subject equipment (e.g. an overhead line). The cloud induces a charge of opposite sign on the tall object. When the potential between the cloud and the line exceeds the breakdown value of air, the lightning discharge occurs between the cloud and the line.

In stroke B, the lightning discharge occurs on the overhead line as the result of a stroke A between clouds. Consider three clouds P, Q and R carrying positive, negative and positive charge respectively. The charge on cloud Q is bound by cloud R. If cloud P drifts too close to cloud Q, a lightning discharge occurs between them and the charges on both clouds dissipate quickly. As a result, the charge on cloud R is suddenly freed and then discharges rapidly to earth, ignoring tall objects.
Wideband Sigma Delta PLL Modulator

Introduction
The proliferation of wireless products over the past few years has been rapid. New wireless standards such as GPRS and HSCSD have brought new challenges to wireless transceiver design. One pivotal component of the transceiver is the frequency synthesizer. Two major requirements in mobile applications are efficient utilization of the frequency spectrum, by narrowing the channel spacing, and fast switching for high data rates. This can be achieved with a fractional-N PLL architecture, which is capable of synthesizing frequencies at channel spacings smaller than the reference frequency. This allows the reference frequency to be increased and also reduces the PLL's lock time.

A fractional-N PLL has the disadvantage that it generates spurious tones at multiples of the channel spacing. Using digital sigma-delta modulation techniques, we can randomize the frequency division ratio so that the quantization noise of the divider is transferred to high frequencies, thereby eliminating the spurs, as sketched below.
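As a rough illustration of the idea, the sketch below uses a first-order accumulator (rather than the higher-order modulators discussed later) to dither an integer divider between N and N+1 so that the average division ratio is fractional; the values of N, K and M are illustrative assumptions, not figures from this article.

```python
# Minimal sketch of fractional-N division with a first-order digital
# sigma-delta (accumulator) modulator. N, K and M are illustrative values;
# the average division ratio converges to N + K/M.
def dithered_ratios(N, K, M, cycles):
    acc = 0
    ratios = []
    for _ in range(cycles):
        acc += K
        if acc >= M:            # accumulator overflow: divide by N+1 this cycle
            acc -= M
            ratios.append(N + 1)
        else:                   # otherwise divide by N
            ratios.append(N)
    return ratios

r = dithered_ratios(N=100, K=3, M=8, cycles=8000)
print(sum(r) / len(r))          # ~100.375 = N + K/M
```

Because the instantaneous ratio toggles pseudo-randomly, the divider's quantization error is spread out and pushed toward high frequencies, where the loop filter can remove it.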

Conventional PLL

The advantages of the conventional PLL modulator are that it offers fine frequency resolution, a wide tuning bandwidth and fast switching speed. For a sufficiently small loop bandwidth it efficiently filters out quantization noise and reference feedthrough. However, it has insufficient bandwidth for current wireless standards such as GSM, so it cannot be used as a closed-loop modulator for the Digital Enhanced Cordless Telecommunications (DECT) standard.

Wideband PLL

For wider-bandwidth applications the loop bandwidth is increased, but this causes residual spurs to appear. This is because the requirement that the quantization noise be uniformly distributed is violated: since the modulator is used for frequency synthesis, its input is a dc value, which produces tones even when higher-order modulators are used. With a single-bit output the level of quantization noise is lower, but with multi-bit outputs the quantization noise increases.

As a result, the stability range of the modulator is reduced, which in turn reduces the tuning range. Moreover, the hardware complexity of this modulator is higher than that of a MASH modulator. In this feedback/feed-forward modulator the loop bandwidth was limited to nearly three orders of magnitude less than the reference frequency, so if it were used as a closed-loop modulator the power dissipation would increase.

To widen the loop bandwidth, the close-in phase noise must be kept within tolerable levels, and the rise of the quantization noise must also be limited to meet the phase-noise requirements at high frequency offsets. At low frequencies (or dc) the modulator transfer function has a zero, which adds phase noise. Therefore the zero is moved away from dc to a frequency equal to some multiple of the fractional division ratio. This introduces a notch at that frequency, which reduces the total quantization noise. The quantization noise of the modified modulator is then 1.7 and 4.25 times smaller than that of the MASH modulator.

At higher frequencies the quantization noise causes distortion in the response. This is because the step size of the multi-bit modulator is the same as that of the single-bit modulator, so more phase distortion occurs in multi-bit PLLs. To reduce the quantization noise at high frequencies, the step size is reduced by producing fractional division ratios. This is achieved by using a phase-selection divider in place of the control logic of the conventional modulator. The divider produces phase-shifted versions of the VCO signal and changes the division ratio by selecting different phases of the VCO, yielding quarter division ratios, as illustrated below.
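A hedged illustration of the quarter-ratio idea: with four evenly spaced VCO phases available, handing over to the next phase at the end of a divide cycle effectively adds a quarter of a VCO period, so the divider realizes ratios in steps of 0.25. The numbers below are purely illustrative, not values from this article.

```python
# Illustration only: an integer divider combined with selection among
# `phases` evenly spaced VCO phases gives division-ratio steps of 1/phases.
def effective_ratio(integer_div, phase_step, phases=4):
    return integer_div + phase_step / phases

print([effective_ratio(64, k) for k in range(4)])   # [64.0, 64.25, 64.5, 64.75]
```

The smaller ratio step size is what reduces the quantization noise at high frequency offsets compared with a divider that can only switch between whole-number ratios.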
Bioinformatics

Introduction

Rapid advances in bioinformatics are providing new hope to patients with life-threatening diseases. Gene chips will be able to screen for heart attack and diabetes years before patients develop symptoms. In the near future, patients will go to a doctor's clinic with lab-on-a-chip devices. The device will inform the doctor in real time whether the patient's ailment will respond to a drug, based on his or her DNA.

These devices will help doctors diagnose life-threatening illnesses faster, eliminating expensive, time-consuming ordeals like biopsies and sigmoidoscopies. Gene chips reclassify diseases based on their underlying molecular signals, rather than misleading surface symptoms. The chip could also confirm the patient's identity and even establish paternity.

Bioinformatics is an interdisciplinary research area, a fusion of computing, biotechnology and the biological sciences. Bioinformatics is poised to be one of the most prodigious growth areas in the next two decades. Being the interface between the most rapidly advancing fields of biological and computational science, it is immense in scope and vast in applications.

Bioinformatics is the study of biological information as it passes from its storage site in the genome to the various gene products in the cell. Bioinformatics involves the creation and use of computational technologies for problems in molecular biology. As such, it deals with methods for storing, retrieving and analyzing biological data, such as nucleic acid (DNA/RNA) and protein sequences, structures, functions, pathways and interactions. The science of bioinformatics, the melding of molecular biology with computer science, is essential to the use of genomic information in understanding human diseases and in the identification of new molecular targets for drug discovery.

New discoveries are being made in the field of genomics, an area of study that looks at the DNA sequence of an organism in order to determine which genes code for beneficial traits and which genes are involved in inherited diseases. If you are not tall enough, your stature could be altered accordingly; if you are weak and not strong enough, your physique could be improved. If you think this is the script of a science-fiction movie, you are mistaken: it is the future reality.

Evolution Of Bioinformatics

DNA is the genetic material of an organism. It contains all the information needed for the development and existence of the organism. The DNA molecule is formed of two long polynucleotide chains which are spirally coiled around each other, forming a double helix; thus it has the form of a spirally twisted ladder. DNA is a molecule made from sugar, phosphate and bases.
The bases are guanine (G), cytosine (C), adenine (A) and thymine (T). Adenine pairs only with thymine, and guanine pairs only with cytosine. Varying combinations of these bases make up DNA, for example AAGCT, CCAGT, TACGGT and so on; an enormous number of combinations is possible. A gene is a sequence of DNA that represents a fundamental unit of heredity. The human genome consists of approximately 30,000 genes, containing approximately 3 billion base pairs.
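As a small illustration of the base-pairing rule just described, the sketch below builds the complementary strand of a DNA sequence (A pairs with T, G pairs with C):

```python
# Complementary-strand construction from the A-T / G-C pairing rule.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(sequence):
    return "".join(PAIR[base] for base in sequence.upper())

print(complement("AAGCT"))   # TTCGA
print(complement("CCAGT"))   # GGTCA
```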
Extreme Ultraviolet Lithography

Introduction
Silicon has been the heart of the world's technology boom for nearly half a century, but microprocessor manufacturers have all but squeezed the life out of it. The current technology used to make microprocessors will begin to reach its limit around 2005. At that time, chipmakers will have to look to other technologies to cram more transistors onto silicon to create more powerful chips. Many are already looking at extreme-ultraviolet lithography (EUVL) as a way to extend the life of silicon at least until the end of the decade.

Potential successors to optical projection lithography are being aggressively developed. These are known as "Next-Generation Lithographies" (NGL's). EUV lithography (EUVL) is one of the leading NGL technologies; others include x-ray lithography, ion-beam projection lithography, and electron-beam projection lithography. Using extreme-ultraviolet (EUV) light to carve transistors in silicon wafers will lead to microprocessors that are up to 100 times faster than today's most powerful chips, and to memory chips with similar increases in storage capacity.

Extreme ultraviolet lithography (EUVL) is an advanced technology for making microprocessors a hundred times more powerful than those made today.

EUVL is one technology vying to replace the optical lithography used to make today's microcircuits. It works by burning intense beams of ultraviolet light, reflected from a circuit design pattern, into a silicon wafer. EUVL is similar to optical lithography, in which light is refracted through camera lenses onto the wafer. However, extreme ultraviolet light, operating at a different wavelength, has different properties and must be reflected from mirrors rather than refracted through lenses. The challenge is to build mirrors perfect enough to reflect the light with sufficient precision.

EUV RADIATION

Ultraviolet radiation has a very short wavelength and correspondingly high energy; if the wavelength is reduced further, it becomes extreme ultraviolet radiation. Current lithography techniques have been pushed just about as far as they can go. They use light in the deep ultraviolet range, at about 248-nanometer wavelengths, to print 150- to 120-nanometer features on a chip. (A nanometer is a billionth of a meter.) Over the next half dozen years, manufacturers plan to make chips with features measuring from 100 to 70 nanometers, using deep ultraviolet light of 193- and 157-nanometer wavelengths. Beyond that point, smaller features require wavelengths in the extreme ultraviolet (EUV) range. Light at these wavelengths is absorbed, rather than transmitted, by conventional lenses.
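The scaling argument can be made concrete with the standard Rayleigh resolution criterion, CD = k1 × λ / NA. The sketch below is only illustrative: the k1 and numerical-aperture values are assumptions chosen to be roughly consistent with the feature sizes quoted above, not figures from this article, and real tools use different optics at each wavelength.

```python
# Rayleigh criterion: printable feature size scales linearly with wavelength.
# k1 and NA below are assumed, illustrative values.
def min_feature_nm(wavelength_nm, k1=0.4, numerical_aperture=0.8):
    return k1 * wavelength_nm / numerical_aperture

for wl in (248, 193, 157, 13.5):    # deep-UV wavelengths and an EUV wavelength
    print(f"{wl} nm light -> roughly {min_feature_nm(wl):.0f} nm features")
```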
Lithography

Introduction
Computers have become much more compact and increasingly powerful largely because of lithography, a basically photographic process that allows more and more features to be crammed onto a computer chip.

Lithography is akin to photography in that it uses light to transfer images onto a substrate. Light is directed onto a mask (a sort of stencil of an integrated circuit pattern) and the image of that pattern is then projected onto a semiconductor wafer covered with light-sensitive photoresist. Creating circuits with smaller and smaller features has required using shorter and shorter wavelengths of light.
Animatronics

Molecular Electronics

Cellonics Technology

Cellular Digital Packet Data

CT Scanning

Continuously variable transmission (CVT)

Introduction
After more than a century of research and development, the internal combustion (IC) engine is nearing both perfection and obsolescence: engineers continue to explore the outer limits of IC efficiency and performance, but advancements in fuel economy and emissions have effectively stalled. While many IC vehicles meet Low Emissions Vehicle standards, these will give way to new, stricter government regulations in the very near future. With limited room for improvement, automobile manufacturers have begun full-scale development of alternative power vehicles. Still, manufacturers are loath to scrap a century of development and billions or possibly even trillions of dollars in IC infrastructure, especially for technologies with no history of commercial success. Thus, the ideal interim solution is to further optimize the overall efficiency of IC vehicles.

One potential solution to this fuel economy dilemma is the continuously variable transmission (CVT), an old idea that has only recently become a bastion of hope to automakers. CVTs could potentially allow IC vehicles to meet the first wave of new fuel regulations while development of hybrid electric and fuel cell vehicles continues. Rather than selecting one of four or five gears, a CVT constantly changes its gear ratio to optimize engine efficiency with a perfectly smooth torque-speed curve. This improves both gas mileage and acceleration compared to traditional transmissions. The fundamental theory behind CVTs has undeniable potential, but lax fuel regulations and booming sales in recent years have given manufacturers a sense of complacency: if consumers are buying millions of cars with conventional transmissions, why spend billions to develop and manufacture CVTs?

Although CVTs have been used in automobiles for decades, limited torque capabilities and questionable reliability have inhibited their growth. Today, however, ongoing CVT research has led to ever-more robust transmissions, and thus ever-more-diverse automotive applications. As CVT development continues, manufacturing costs will be further reduced and performance will continue to increase, which will in turn increase the demand for further development. This cycle of improvement will ultimately give CVTs a solid foundation in the world's automotive infrastructure.

CVT Theory & Design

Today's automobiles almost exclusively use either a conventional manual or automatic transmission with "multiple planetary gear sets that use integral clutches and bands to achieve discrete gear ratios". A typical automatic uses four or five such gears, while a manual normally employs five or six. The continuously variable transmission replaces discrete gear ratios with infinitely adjustable gearing through one of several basic CVT designs.
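To make the "constantly changes its gear ratio" idea concrete, here is a minimal sketch of the control law a CVT approximates: choose the ratio that holds the engine at a target speed for the current road speed, clamped to the unit's physical ratio range. All numeric values (target rpm, tyre size, final drive, ratio limits) are illustrative assumptions, not data from this article.

```python
# Illustrative CVT ratio selection: keep the engine near a target rpm,
# subject to the transmission's ratio limits. All parameters are assumed.
def cvt_ratio(vehicle_speed_kmh,
              target_engine_rpm=2000.0,    # assumed best-efficiency engine speed
              wheel_circumference_m=2.0,   # assumed tyre circumference
              final_drive=4.0,             # assumed final-drive ratio
              ratio_limits=(0.5, 2.5)):    # assumed CVT ratio range
    wheel_rpm = (vehicle_speed_kmh * 1000.0 / 60.0) / wheel_circumference_m
    ideal = target_engine_rpm / (wheel_rpm * final_drive)
    low, high = ratio_limits
    return max(low, min(high, ideal))

for speed in (30, 60, 90, 120):
    print(f"{speed} km/h -> ratio {cvt_ratio(speed):.2f}")
```

Because the ratio is continuous rather than stepped, the engine can stay at its most efficient operating point over a wide range of road speeds, which is the source of the fuel-economy benefit described above.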
High-availability power systems: Redundancy options

Introduction
In applications such as major computer installations, process control in chemical plants, safety monitors, and intensive care units of hospitals, even a temporary power failure may lead to large economic losses. For such critical loads, it is of paramount importance to use UPS systems. However, all UPS equipment must be completely de-energized for preventive maintenance at least once per year, which limits the availability of the power system. New UPS systems are now on the market that permit concurrent maintenance.

High-Availability Power Systems

The computing industry talks in terms of "Nines" of availability. This refers to the percentage of time in a year that a system is functional and available to do productive work. A system with four "Nines" is 99.99 percent available, meaning that downtime is less than 53 minutes in a standard 365-day year. Five "Nines" (99.999 percent available) equates to less than 5.3 minutes of downtime per year. Six "Nines" (99.9999 percent available) equates to just 32 seconds of downtime per year. These same numbers apply when we speak of the availability of conditioned power. The goal is to maximize the availability of conditioned power and minimize exposure to unconditioned utility power. The concept of continuous availability of conditioned power takes this one step further. After all, 100 percent is greater than 99.99999 percent.
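The arithmetic behind these "Nines" figures is simple enough to check directly; the short sketch below reproduces the downtime numbers quoted above.

```python
# Downtime per standard 365-day year implied by a given availability.
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes(availability):
    return (1.0 - availability) * MINUTES_PER_YEAR

for nines, a in (("four", 0.9999), ("five", 0.99999), ("six", 0.999999)):
    m = downtime_minutes(a)
    print(f"{nines} nines: {m:.1f} min/year ({m * 60:.0f} s)")
# four nines: ~52.6 min, five nines: ~5.3 min, six nines: ~32 s
```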

The Road To Continuous Availability
We determine availability by studying four key elements:

o Reliability
The individual UPS modules, static transfer switches and other power distribution equipment must be incredibly reliable, as measured by field-documented MTBF (Mean Time Between Failures). In addition, the system elements must be designed and assembled in a way that minimizes the complexity and single points of failure.

o Functionality
The UPS must be able to protect the critical load from the full range of power disturbances, and only a true double-conversion UPS can do this. Some vendors offer single- conversion (line-interactive) three-phase UPS products as a lower cost alternative. However, these alternative UPS's do not protect against all disturbances, including power system short circuits, frequency variations, harmonics and common mode noise. If your critical facility is truly critical, only a true double conversion UPS is suitable.

o Maintainability
The system design must permit concurrent maintenance of all power system components, supporting the load with part of the UPS system while other parts are being serviced. As we shall see, single bus solutions do not completely support concurrent maintenance.

o Fault Tolerance
The system must have fault resiliency to cope with a failure of any power system component without affecting the operation of the critical load equipment. Furthermore, the power distribution system must have fault resiliency to survive the inevitable load faults and human error. The two factors of field-proven critical bus MTBF in excess of one million hours and double-conversion technology ensure reliability and functionality. With reliability and functionality assured, let us look at how different UPS system configurations compare for maintainability and fault tolerance.
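The link between the MTBF figure mentioned above and an availability percentage can be sketched with the standard steady-state formula A = MTBF / (MTBF + MTTR). The repair time and the independent-failure assumption used for the redundant pair below are illustrative assumptions, not values from this article.

```python
# Steady-state availability from MTBF/MTTR, and the effect of a redundant,
# independently failing second bus (repair time and independence are assumed).
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

single = availability(1_000_000, 8.0)    # 1,000,000 h MTBF, assumed 8 h repair
dual = 1.0 - (1.0 - single) ** 2         # two independent buses in parallel
print(f"single bus: {single:.6f}")
print(f"dual bus:   {dual:.10f}")        # unavailability of the pair is squared
```

This is why the following sections emphasize redundant bus configurations: the unavailability of a properly isolated redundant pair is roughly the square of that of a single bus.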
IGCT

Introduction
Thyristor technology is inherently superior to transistor technology for blocking voltages above 2.5 kV, with plasma distributions equal to those of diodes offering the best trade-off between the on-state and blocking voltages. Until the introduction of newer power switches, the only serious contenders for high-power transportation systems and other applications were the GTO (thyristor), with its cumbersome snubbers, and the IGBT (transistor), with its inherently high losses. Until now, adding the gate turn-off feature has resulted in the GTO being constrained by a variety of unsatisfactory compromises. The widely used standard GTO drive technology results in inhomogeneous turn-on and turn-off that call for costly dv/dt and di/dt snubber circuits combined with bulky gate drive units.

Rooted in the GTO is one of the newest power switches, the Gate-Commutated Thyristor (GCT). It successfully combines the best of the thyristor and transistor characteristics while fulfilling the additional requirements of manufacturability and high reliability. The GCT is a semiconductor device based on the GTO structure, whose cathode emitter can be shut off "instantaneously", thereby converting the device from a low conduction-drop thyristor into a low switching-loss, high dv/dt bipolar transistor at turn-off.

The IGCT (Integrated GCT) is the combination of the GCT device and a low-inductance gate unit. This technology extends transistor switching performance to well above the MW range, with 4.5 kV devices capable of turning off 4 kA and 6 kV devices capable of turning off 3 kA without snubbers. The IGCT represents the optimum combination of low-loss thyristor technology and snubberless gate turn-off for demanding medium- and high-voltage power electronics applications. (A turn-off waveform figure accompanying the original shows the anode voltage as a thick trace and the anode current as a lighter trace during the IGCT turn-off process.)

The GTO and the thyristor are four-layer (npnp) devices. As such, they have only two stable points on their characteristics: 'on' and 'off'. Every state in between is unstable and results in current filamentation. The inherent instability is worsened by processing imperfections. This has led to the widely accepted myth that a GTO cannot be operated without a snubber. Essentially, the GTO has to be reduced to a stable pnp device, i.e. a transistor, for the few critical microseconds during turn-off.

To stop the cathode (n) from taking part in the process, the bias of the cathode n-p junction has to be reversed before voltage starts to build up at the main junction. This calls for commutation of the full load current from the cathode (n) to the gate (p) within one microsecond. Thanks to a new housing design, 4000 A/µs can be achieved with a low-cost 20 V gate unit. Current filamentation is totally suppressed, and the turn-off waveforms and safe operating area are identical to those of a transistor.
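A quick check of the one-microsecond claim, using the figures quoted in this section (4 kA turn-off current and a 4000 A/µs commutation rate):

```python
# Time to commutate the full load current from cathode to gate.
load_current_a = 4000.0      # 4 kA turn-off current (from the text)
di_dt_a_per_us = 4000.0      # 4000 A/us commutation rate (from the text)
print(load_current_a / di_dt_a_per_us, "microseconds")   # 1.0
```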

IGCT technology brings together the power-handling device (GCT) and the device control circuitry (freewheeling diode and gate drive) in an integrated package. By offering four levels of component packaging and integration, it permits simultaneous improvement in four interrelated areas: low switching and conduction losses at medium voltage, simplified circuitry for operating the power semiconductor, reduced power system cost, and enhanced reliability and availability. Also, by providing pre-engineered switch modules, IGCT enables medium-voltage equipment designers to develop their products faster.
#2

Introduction to Bioinformatics


Secondary structure prediction
• History and Context
• Chou and Fasman
• Garnier-Osguthorpe-Robson
• Comparison of Methods
• Newer Approaches


Protein Sequence Analysis

Secondary Structure Prediction
The primary structure of a protein is the sequence of amino acids from which it is constructed. There are 20 naturally occurring amino acids. All amino acids share a common chemical structure: a tetrahedral (sp3) carbon atom (C_alpha) to which four asymmetric groups are connected: an amino group (NH2), a carboxyl group (COOH), a hydrogen atom and another chemical group (denoted by R) which varies from one amino acid to another. Two amino acids connect via a peptide bond to form a polypeptide structure, the protein. The peptide bond is formed by a condensation reaction between the amino and carboxyl groups, which releases a water molecule and forms a covalent bond between them. Due to its partial double-bond character, the peptide unit NH-CO is planar and is always in a trans configuration. The peptide unit together with the C_alpha is termed the backbone, and the residue R is termed the side chain; the side chain differs from one amino acid to another. As the central C_alpha atom has four different groups connected to it, it is chiral; all naturally occurring amino acids are L amino acids.

Each amino acid contains an amine group (NH2) and a carboxyl group (COOH); the amino acids vary in their side chains. Eight of the amino acids have nonpolar, hydrophobic side chains. The remaining amino acids are polar and hydrophilic ("water loving"). Two of them are acidic, carrying a second carboxyl group in the side chain, and three are basic, carrying an additional amine group in the side chain.
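As a small worked example of the side-chain classes described above, the sketch below tallies them for a protein sequence given in one-letter amino acid codes. The residue groupings used are one common textbook classification, assumed here for illustration rather than taken from the article.

```python
# Classify residues of a one-letter-coded protein sequence by side-chain type.
# The groupings below are an assumed, commonly used classification.
HYDROPHOBIC = set("AVLIMFWP")    # nonpolar side chains
ACIDIC = set("DE")               # extra carboxyl group (Asp, Glu)
BASIC = set("KRH")               # extra amine group (Lys, Arg, His)

def side_chain_classes(sequence):
    counts = {"hydrophobic": 0, "acidic": 0, "basic": 0, "other polar": 0}
    for residue in sequence.upper():
        if residue in HYDROPHOBIC:
            counts["hydrophobic"] += 1
        elif residue in ACIDIC:
            counts["acidic"] += 1
        elif residue in BASIC:
            counts["basic"] += 1
        else:
            counts["other polar"] += 1
    return counts

print(side_chain_classes("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))
```

Composition statistics of this kind are the sort of simple sequence-derived feature that propensity-based secondary structure prediction methods, such as those listed earlier, build upon.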

For more information about this article, please follow the link:

http://googleurl?sa=t&source=web&cd=1&ve...truc.a.pdf&ei=TSu5TNqUBc-TjAfglfmpDg&usg=AFQjCNGixPIzExKkxeA5NYHaZYj4nZ6qrw