Framework for power control in cellular systems
Abstract:
Efficiently sharing the spectrum resource is of paramount importance in wireless communication systems, in particular in Personal Communications where large numbers of wireless subscribers are to be served. Spectrum resource sharing involves protecting other users from excessive interference as well as making receivers more tolerant to this interference. Transmitter power control techniques fall into the first category. In this paper we describe the power control problem, discuss its major factors, objective criteria, measurable information and algorithm requirements. We attempt to put the problem in a general framework and propose an evolving knowledge bank for sharing, studying and comparing algorithms.

Presented By:
Mohd Abubakr 3rd year ECE, R.M. Vinay 3rd year ECE,
Gokaraju Rangaraju Inst. of Engg. & Tech.


1. Introduction
The rapid increase in demand for wireless mobile services, everywhere and at any time, requires highly sophisticated schemes for allocating and managing radio channel resources. These resources are becoming scarcer as a result of the rather limited allocation of spectrum by international agreements.
A typical cellular network span is illustrated in figure 1. It consists of a stationary wireline subnetwork and a wireless subnetwork. The wireline subnetwork connects the base stations (i.e., Radio Access Ports, RAPs), which provide the last-hop connection to the mobile users. The RAPs are distributed over the service area so as to provide connections of pre-specified quality to the mobile users over the last wireless hop. The coverage area of a RAP (i.e., its cell) is the neighborhood within which the wireless connections can be maintained with the required quality.
Figure 1. A cellular network span.

A principal method to efficiently allocate and manage channel resources is to share them. The more they are shared the better, so long as connection quality does not deteriorate. The major penalty of sharing radio channels is the unwanted (co-channel) interference generated during the sharing process, on which we elaborate below. Different sharing (multiple access) techniques have different impacts. Besides co-channel interference, wireless networks also suffer from cross-channel interference resulting from imperfect technology, Doppler shift and multi-path propagation. The major objective of transmitter power control is to alleviate these co-channel and cross-channel interferences.
The classical sharing method in wireless communication is Frequency Division Multiple Access (FDMA), where the entire available bandwidth is subdivided into non-overlapping (orthogonal) frequency slots. Due to signal orthogonality, FDMA can be implemented within each cell. However, imperfect technology and Doppler frequency shift result in cross-channel interference. With current technology, this cross-frequency interference is mainly alleviated by a safe separation of the carrier frequencies, which clearly decreases the spectrum utilization.
Another basic sharing method is Time Division Multiple Access (TDMA). The principle here is to partition each frequency channel into time slots which are assigned to multiple synchronized users in a round-robin fashion. To reduce the number of idle slots, dynamic idle time slot reassignment is often used. Due to signal orthogonality, TDMA can be utilized within each cell. As with FDMA, imperfect time synchronization among distributed users, imperfect technology, non-ideal time slot reassignment and multi-path propagation translate into cross-channel interference. In first-generation GSM [8] and in IS-54, this interference is mostly resolved by inserting guard-times between time slots, which again decreases the spectrum utilization.
A more advanced sharing method is based on wideband spread-spectrum signaling, where all transmitters simultaneously occupy the entire available wideband spectrum. Spread spectrum can be implemented either by transmitting with a very fast chip rate (much larger than the symbol rate), or by using Fast Frequency Hopping (F-FH) over a large number of orthogonal frequencies. From an information-theoretic point of view, these are equivalent. If time synchronization can be maintained and the total data rate is not too high relative to the available bandwidth, Code Division Multiple Access (CDMA) signals can in theory preserve orthogonality. In practice, however, even under orthogonal CDMA, imperfect technology, multi-path fading, the near-far effect, imperfect synchronization and bursts of high data rates give rise to co-channel intra-cell and inter-cell interference. The CDMA method is the cornerstone of the IS-95 standard, where orthogonal CDMA is used in the downlinks and non-orthogonal asynchronous CDMA is used in the uplinks.
A multiple access scheme which can be placed between the narrowband and the wideband techniques is Slow Frequency Hopping (S-FH), where each user slowly hops between a set of orthogonal frequencies. From the interference perspective, the signal of each user is interfered with by a relatively small number of arbitrary users that happen to hop to the same frequency. Under F-FH and spread-spectrum signaling, however, all users interfere with each other all the time. Thus, under spread spectrum, both CDMA and good error correction codes are needed, whereas only the latter is required for S-FH. S-FH is used in many of the current GSM installations.
In wireless networks these basic multiple access schemes are combined with geographical spectrum reuse, where the same radio channel bandwidth is shared among several RAPs which are sufficiently remote from each other. This sharing technique exploits the distance-dependent attenuation, and it is utilized by all mobile wireless service systems. Observe, however, that since radio signals are not blocked at cell borders, inter-cell co-channel interference emerges. The interference power depends on the positions of the mobiles with respect to their RAPs and with respect to each other. Major factors affecting the interference are morphological environments, buildings, mobile speeds and the co-channel frequencies. Due to mobility, these factors vary in time, making the interference conditions time-dependent.
Worth noting is that, with respect to co-channel interference, spread spectrum and FH are quite appealing, since signal interference becomes more uniform over time and space, and therefore more predictable. This makes it possible to design systems which are more immune to co-channel interference and operate at lower bit error rates. Although the signal interference becomes more uniform, the common assumption that co-channel interference under spread-spectrum signaling is similar to additive white Gaussian noise has turned out to be false.
More efficient spatial reuse techniques employ directional and smart antennas, enabling better frequency sharing. Directional antennas are used to subdivide each cell into several partially overlapping sectors, and to reuse the same channel in different sectors. The unavoidable partial overlapping, however, introduces inter-cell co-channel interference. Smart antennas, on the other hand, push space sharing to the high end. They could, in theory, separate signals from different users who transmit on the same frequency at the same time and are practically placed on the same line of sight from the receiving antenna.
To summarize, economical and competitive mobile wireless services require efficient sharing of channel resources. All sharing methods in practice introduce interference of one sort or another, which is proportional to the transmitter powers. Therefore, transmitter power control is a key technique for better balancing the received signal against the interference, which in turn enables more efficient sharing of channel resources.
2. The power control problem
In this section we attempt to setup a framework model for power control. We start by identifying dominant factors and channel quality criteria. Then we present several measurable control data, and complete with a discussion on the algorithm requirements.
2.1. Dominant factors
A good power control framework should clearly embed the dominant factors which cause the interference. Below we classify them into six categories.
The most dominant factor, in our mind, is the tightly coupled pair of signaling and modulation scheme combined with the multiple access method. Both techniques and their impacts on interference are overviewed and discussed in the Introduction above. To demonstrate their different impacts, consider a narrowband TDMA channel (as in first-generation GSM) and a non-orthogonal asynchronous uplink CDMA channel (as in IS-95). In the former, there is only a very small number of dominating interferers (usually up to three), whereas in the latter every user interferes with every other user. A large number of interferers could result in Gaussian statistics, especially when the overall interference power is practically fixed and is accumulated from roughly equal interference powers. Therefore, Gaussian statistics could be manifested in special cases of spread-spectrum CDMA and F-FH, but not in narrowband TDMA.
The S-FH technique introduces another dimension into the interference model. Unlike narrowband TDMA, where interference is governed by a slowly time-varying random process (i.e., high correlation within 100 ms), time correlation in S-FH diminishes much faster. The reason stems from the fact that in TDMA the interferers vary relatively slowly and fast multi-path fading can be averaged out. In S-FH, on the other hand, the number of interferers is small (e.g., 10, depending on the implementation), and they keep changing every hop. Slowly time-varying random processes, as in TDMA, can be approximated by stationary models, whereas fast time-varying processes, as in S-FH, require more complex models.
A second factor is link orientation, that is, whether the link is a downlink or an uplink. Signal propagation in the two cases could be quite different, especially in wide areas where RAP antennas are stationary and placed at elevated locations. User transceivers, on the other hand, are usually located amidst buildings and other obstacles which create shadowing and multi-path reflections. Moreover, there are many users who move around and transmit on the uplink, and relatively few stationary RAPs which transmit on the downlink. Such asymmetry translates into quite different co-channel interference phenomena. Another important differentiator is the transmission and processing capability each end can employ. The rise of smart antennas at the RAPs will sharpen this difference.
A third factor is environment morphology and topology. Radio signal propagation strongly depends on the landscape and on obstacle substance and shape. Therefore, wide areas, urban areas and indoor areas require different interference models. Wide areas may further vary depending on their morphology. Interference in urban areas is governed by reinforced concrete obstacles with sharp corners, making line-of-sight a dominant factor. Indoor interference is mainly governed by slow-moving man-made shadowing and plaster walls.
A fourth factor is the speed of the mobile terminals. Speed, along with the carrier frequency, governs the time correlation of the multi-path fading (also referred to as fast fading). Its statistical distribution is determined by the environment topology; e.g., in wide areas it could be Rayleigh and in urban areas it could be Rician.
A fifth factor is cell hierarchy, which has recently been incorporated into cell planning for better spectrum utilization. Hierarchical cells consist of a very large number of pico cells covered by a smaller number of micro cells. The entire service area could further be covered by macro cells. Mobile users may switch between pico, micro and macro cells depending on their speed, location and connection quality. Transmission within each cell type is constrained by different power limits and dynamic ranges. Such a hierarchical architecture clearly introduces a highly asymmetric interference pattern, which translates into yet another statistical model.
The sixth and last factor is the connection type, which is closely related to the multiple access method. By connection type we refer to Continuous Transmission (CTX), Discontinuous Transmission (DTX), packet-switched connections, Variable Bit Rate (VBR) coding, etc. Interference statistics and time dependency under each connection type are different. In DTX, for instance, transmission is ceased when idle speech intervals are detected, making the interference more random and less correlated in time. A similar effect, with an even stronger impact, occurs in packet-switched connections.
2.2. Channel quality and objective criteria
The most common criterion of channel quality is its bit error rate (BER), i.e., the average rate of erroneous bits. Note, however, that the quality of voice and other real-time connections is also affected by the bit jitter (a rather loose term which reflects the bit delay variation).
Another quality criterion which is highly correlated with the BER is the Signal to Interference Ratio (SIR). SIR is defined as the ratio between the desired average received signal power and the overall average interference power (including the background noise). Since SIR is mathematically more tractable than BER, it is used more often in modeling.
BER and SIR are related as follows. The number of erroneous bits in a given connection is a stochastic process in time which is modulated mainly by the instantaneous SIR value. That is, the SIR value at a given time instant determines the distribution of the number of erroneous bits at that time. Each signaling, modulation and coding scheme may have a different distribution function. Some of these distributions could be degenerate, e.g., with ideal coding a bit is either erroneous or valid, depending on the SIR value at the time it is being received.
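To make the SIR-to-BER mapping concrete, the sketch below (Python) evaluates the textbook case of uncoded coherent BPSK under Gaussian-like interference, for which BER = Q(sqrt(2·SIR)). This particular mapping is only an illustration of one possible distribution function and is not the one assumed anywhere in this paper.

```python
import math

def q_function(x: float) -> float:
    """Gaussian tail probability Q(x), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_ber(sir_db: float) -> float:
    """Illustrative BER of uncoded coherent BPSK under Gaussian-like interference."""
    sir_linear = 10.0 ** (sir_db / 10.0)
    return q_function(math.sqrt(2.0 * sir_linear))

if __name__ == "__main__":
    for sir_db in (0.0, 3.0, 6.0, 9.0, 12.0):
        print(f"SIR = {sir_db:4.1f} dB  ->  BER ~ {bpsk_ber(sir_db):.2e}")
```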
Since SIR values modulate the BER, SIR-based power control is less tight than BER-based power control. Observe, however, that both controls must be used with great care as they are usually a function of average values. To demonstrate one issue, consider the following. Practical constraints impose a sliding time window technique to monitor and estimate control data. Selecting the window and sample sizes is instrumental to the estimator's reliability. Too large a window may not reflect the actual connection quality, and too short a window may contain too many correlated samples. Clearly, correlated values could generate highly biased estimators.
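The window-size trade-off can be illustrated numerically. The minimal sketch below draws correlated instantaneous SIR samples from an assumed AR(1) model (the mean, deviation and correlation are made-up values) and shows how the spread of the window averages grows as the window shrinks; it is only meant to illustrate the correlated-samples issue discussed above, not any particular system's estimator.

```python
import random
import statistics

def ar1_sir_samples(n, mean_db=8.0, std_db=3.0, rho=0.95, seed=1):
    """Correlated instantaneous SIR samples (in dB) from an assumed AR(1) model."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    innovation_std = std_db * (1.0 - rho ** 2) ** 0.5
    for _ in range(n):
        x = rho * x + rng.gauss(0.0, innovation_std)
        samples.append(mean_db + x)
    return samples

def window_averages(samples, window):
    """Averages over consecutive (non-overlapping) windows of the given length."""
    return [statistics.fmean(samples[i:i + window])
            for i in range(0, len(samples) - window + 1, window)]

if __name__ == "__main__":
    data = ar1_sir_samples(20_000)
    for window in (10, 100, 1000):
        estimates = window_averages(data, window)
        print(f"window = {window:4d} samples: spread of window averages "
              f"= {statistics.stdev(estimates):.2f} dB")
```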
Note that channel quality is a hard constraint rather than an objective function which needs optimization. That is, each connection service and built-in decoder requires a pre-specified minimum BER. When such a rate cannot be met, the connection is dropped. The outage probability is the system-related objective function, defined as the long-term proportion of connections which are dropped.
Observe that the outage probability also depends on the cell plan and the traffic load. For instance, with fixed channel allocation it depends also on the cell size, the reuse factor and the traffic load. As a conclusion of this discussion, one can set the following as a primary system objective: minimize the reuse factor under heavy traffic load, subject to some given maximum outage probability.
2.3. Measurable information
The information which power control can utilize is quite controversial, and it mainly depends on the existing system architecture to which it applies. From any practical point of view, a power control algorithm must be distributed and use only local information. Also, measurements should be gathered at a rate which suits the power control rate. Clearly, fast power control which combats Rayleigh fading must refresh its information much faster than slow power control which combats shadow fading.
Note that local measurements for a given channel can be drawn at the transmitter and at the receiver. As we control the transmitter power, receiver measurements are delayed by a time period which depends on the propagation delay and the link capacity. This delay is in effect whether power control commands are issued by the receiver or by the transmitter.
Measurements which can be gathered at the receiver are gains of the channel from the transmitter (using a beacon signal), total received signal power, received data bits, number of erroneous bits and background receiver noise. Measurements which are available at the transmitter are its transmission power, its transmitted data bits and gains of the channel from the receiver (using a beacon signal). Note that in a reciprocal channel, the latter are similar to the gains of the channel from the transmitter.
All measurements are subject to errors which depend on the monitoring device, the sample size and the sample distribution. The sample size further depends on the sliding time window and the device sampling rate. The manner in which measurements are used in the control algorithm may generate additional errors, for instance measurement aging (i.e., the time elapsed between the moment they are taken and the moment they are used) and estimator distributions. These errors need to be incorporated into the model.
The measurable information and the objective function can formally be represented by a relatively small number of system parameters. Let i denote a transmitter and r(i, t) its assigned receiver at time t. A fundamental set of parameters are the link gains {G_{i,j}(t)}, i.e., the gain of the link (in power units) between transmitter j and receiver i at time t. The received signal power and the total received power can be represented as functions of these link gains as follows.
Given that transmitter i is transmitting with power P_i(t) at time t, its signal is received by its receiver r(i, t) with power G_{r(i,t),i}(t) · P_i(t).
The total received power at receiver r(i, t) (including the interference power and the background noise) is

Σ_j θ_{i,j}(t) · G_{r(i,t),j}(t) · P_j(t) + N_{r(i,t)}(t),

where N_{r(i,t)}(t) is the background noise power at time t, and θ_{i,j}(t) is the correlation between the signals from transmitter j and transmitter i at time t. If the waveforms of transmitters i and j are orthogonal, then θ_{i,j}(t) equals either 1 or 0, depending on whether or not these transmitters are assigned to the same channel. If they are not orthogonal, then θ_{i,j}(t) is a random variable taking values between 0 and 1.
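As a numerical reading of these definitions, the sketch below (Python) evaluates the received signal power, the total received power and the resulting SIR for a small made-up example; all gains, powers, correlation coefficients and noise levels are illustrative assumptions, not values from any real system.

```python
def received_sir(i, P, G, theta, noise, rx_of):
    """SIR at the receiver r(i) assigned to transmitter i.

    P[j]        : transmit power of transmitter j (W)
    G[r][j]     : link gain between receiver r and transmitter j
    theta[i][j] : correlation between the signals of transmitters i and j
    noise[r]    : background noise power at receiver r (W)
    rx_of[i]    : index of the receiver assigned to transmitter i
    """
    r = rx_of[i]
    signal = G[r][i] * P[i]
    total = sum(theta[i][j] * G[r][j] * P[j] for j in range(len(P))) + noise[r]
    return signal / (total - signal)

if __name__ == "__main__":
    P = [0.5, 1.0, 0.8]                       # illustrative transmit powers
    G = [[1e-7, 2e-9, 5e-10],                 # one row per receiver
         [3e-9, 8e-8, 1e-9],
         [1e-9, 4e-9, 6e-8]]
    theta = [[1.0, 0.2, 0.1],                 # theta[i][i] = 1 by definition
             [0.2, 1.0, 0.3],
             [0.1, 0.3, 1.0]]
    noise = [1e-12, 1e-12, 1e-12]
    rx_of = [0, 1, 2]                         # fixed assignment r(i) = i
    for i in range(3):
        print(f"transmitter {i}: SIR = {received_sir(i, P, G, theta, noise, rx_of):.1f}")
```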
The received powers govern several measurements which can be used to control the powers. Below we relate these measurements, the objective function and the power control. For the sake of simplicity, consider the case where receiver assignments are fixed in time (i.e., r(i, t) = r(i)), and the erroneous bits, the received power and the signal power are measured by the following averages over a time window of length T:

BEA_i(t) = the number of erroneous bits received at r(i) during [t − T, t], divided by T;
R_i(t) = the total power received at r(i), averaged over [t − T, t];
G_{r(i),i}(t) = the received power of the beacon signal transmitted by i, averaged over [t − T, t] and divided by P,

where P is a known beacon signal power.
Due to measurement errors, the actual measured values at time t are BEA_i(t) + E_i^1(t), R_i(t) + E_i^2(t) and G_{r(i),i}(t) + E_i^3(t), where the E_i^j(t) are the respective errors. Measurement errors are difficult to model; they strongly depend on the sampling hardware, the sampled population and the specific environment. A Gaussian error model could be appropriate if the central limit theorem can be applied. Otherwise, worst-case bounds are preferred. When the sample size is large and the estimators are unbiased, these errors can be ignored.
Given the measurements, the BER and the SIR at receiver i during the time interval [t − T, t] (BER_i(t) and SIR_i(t), respectively) can be estimated by BEA_i(t) + E_i^1(t) and by

SIR_i(t) ≈ (G_{r(i),i}(t) + E_i^3(t)) · P_i(t) / [R_i(t) + E_i^2(t) − (G_{r(i),i}(t) + E_i^3(t)) · P_i(t)].
Note that BER_i(t) and SIR_i(t) are time-dependent average values and serve as natural candidates for controlling the channel BER and SIR.
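A minimal sketch of this SIR estimator, with a hypothetical error added to the beacon-derived gain, shows how a measurement error propagates into the estimate (all numbers are illustrative assumptions):

```python
def sir_estimate(p_tx, gain_meas, total_rx_meas):
    """SIR estimate from windowed measurements.

    p_tx          : known transmission power P_i(t)
    gain_meas     : measured link gain, G_{r(i),i}(t) plus its error term
    total_rx_meas : measured total received power, R_i(t) plus its error term
    """
    signal = gain_meas * p_tx
    return signal / (total_rx_meas - signal)

# Illustrative numbers only: a true gain of 1e-7 observed with a small
# multiplicative error, against a measured total received power of 2e-7 W.
true_gain, p_tx, total_rx = 1e-7, 1.0, 2e-7
for gain_error in (0.0, +0.10, -0.10):
    est = sir_estimate(p_tx, true_gain * (1.0 + gain_error), total_rx)
    print(f"gain error {gain_error:+.0%}: SIR estimate = {est:.2f}")
```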
Besides measurement errors, BER_i(t) and SIR_i(t) are also subject to estimator errors due to finite and biased sampling. These two types of errors can be drastically reduced by faster and better sampling techniques, utilizing the strong law of large numbers. Note that with current sampling rates this can indeed be done, although other soft inhibitors such as design constraints and the power update rate may exist.
A third type of error which cannot be eliminated is the delayed estimator error. It springs from the following distributed mechanism which is used to control the powers. Measurements are taken at the receiver during a time interval [t − T, t]. They are then processed and their output is sent to the transmitter. The transmitter updates its power, which subsequently affects the SIR and BER at the receiver at time t + Δt (where Δt is the delay between the time the measurements are taken and the time they feed back into the system). This information aging is manifested in the delayed estimator error, on which we elaborate in the next subsection.
2.4. Algorithm requirements
A power control algorithm is required to update transmission powers by either fixed or variable power level increments, and at either fixed or variable time steps. The system structure clearly dictates distributed and asynchronous power updates which is the major requirement. That is, each transmitter in every channel updates its transmission power based on local measurements and a local time clock. Some sort of synchronization though, can be applied in the downlink.
Another requirement is the stability of the control process. A key to stability in a stochastic environment is the error distribution of the updated powers, which is formulated below. To better clarify it note that signal propagation and interference are random processes which vary in time. These processes are being sampled for decoding, error correction and estimation purposes, and the sampled statistics are subject to time-variant statistical errors.
As mentioned in section 2.3, a major source of estimator errors is information aging. Since the underlying processes have complex correlation functions, precise evaluation of the error distribution is very difficult. Furthermore, the error distribution depends on the power update rate and on the modulation and coding schemes. More specifically, it depends on the time between two consecutive power updates and on which attenuation factors are averaged out. Nevertheless, since power updates are relatively fast, asymptotic evaluation of the errors could still be useful. For the sake of simplicity, assume that an arbitrary link gain G(t) (in power units) is given by
G(t) = L(t) · S(t) · R²(t).
Here, L(t) = D(t)^(−α), where D(t) is the distance between the transmitter and the receiver at time t and α is a propagation constant; S(t) is the correlated shadow fading time process; and R(t) is the correlated Rayleigh fading time process.
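One draw of such a link gain could be sampled as in the Python sketch below; the path-loss exponent, shadowing deviation and distance are assumed illustrative values, and the Rayleigh component is represented directly through its exponentially distributed power R²(t).

```python
import random

def sample_link_gain(distance_m, alpha=3.5, sigma_s_db=8.0, rng=random):
    """One sample of G = L * S * R^2 (all parameter values are illustrative).

    L   : distance-dependent attenuation D^(-alpha)
    S   : log-normal shadow fading with log-standard-deviation sigma_s_db (dB)
    R^2 : Rayleigh fast-fading power, exponentially distributed with mean 1
    """
    L = distance_m ** (-alpha)
    S = 10.0 ** (rng.gauss(0.0, sigma_s_db) / 10.0)
    R2 = rng.expovariate(1.0)
    return L * S * R2

if __name__ == "__main__":
    random.seed(7)
    print(["%.3e" % sample_link_gain(500.0) for _ in range(5)])
```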
If Rayleigh and shadow fading are both averaged out, then time correlation is dominated by the process {L(t)}, which varies relatively slowly. For asymptotically small dt, its time evolution can be represented by

L(t + dt) = L(t) + o(v · dt),

where v is the mobile speed.
If only Rayleigh fading is averaged out, then time correlation also depends on the shadow fading space correlation. Based on reported field measurements, it has been shown that {S(t)} satisfies the asymptotic evolution

ln S(t + dt) = ε(dt) · ln S(t) + a · √(1 − ε²(dt)) · Z(t),

where ε(dt) is the shadow fading correlation over the distance traveled during dt, a = (σ_S/10) · ln(10), σ_S is the log-standard-deviation of S(t), and Z(t) is an independent Gaussian random variable.
If Rayleigh fading is not averaged out, then, assuming Jakes' model, the evolution of {R²(t)} can be asymptotically approximated in terms of ρ(dt), the zero-order Bessel function of the first kind evaluated at 2πvf·dt/C, where f is the carrier frequency, C is the speed of light and v is the mobile velocity. The random variables entering the approximation are independent and exponentially distributed with mean 1, and the parameters a(dt) and b(dt) are constants depending on dt, the carrier frequency and the mobile speed.
E.g., for 900 MHz and 90 km/h, a(10 ms) = 2, b(10 ms) = 1.25, and a(dt) = 1.1, b(dt) = 1.25 for dt = 0.1, 1 ms. For 30 km/h, a(10 ms) = 2, b(10 ms) = 1.5, and a(dt) = 1.1, b(dt) = 1.25 for dt = 0.1, 1 ms.
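To get a feel for these time scales, the Bessel-function term can be evaluated directly. The sketch below uses SciPy's j0 with the 900 MHz carrier and the two speeds quoted above; it is only meant to show how quickly the fast-fading correlation decays with dt, and does not reproduce the a(dt), b(dt) parameters themselves.

```python
import math
from scipy.special import j0   # zero-order Bessel function of the first kind

C = 3.0e8        # speed of light (m/s)
F = 900.0e6      # carrier frequency (Hz), as in the example above

def fading_correlation(dt_s, speed_kmh):
    """rho(dt) = J0(2*pi*v*f*dt / C) under Jakes' model."""
    v = speed_kmh / 3.6
    return j0(2.0 * math.pi * v * F * dt_s / C)

if __name__ == "__main__":
    for speed in (30.0, 90.0):
        for dt_ms in (0.1, 1.0, 10.0):
            rho = fading_correlation(dt_ms / 1000.0, speed)
            print(f"v = {speed:4.0f} km/h, dt = {dt_ms:5.1f} ms: rho = {rho:+.3f}")
```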
Once the error distribution has been derived, the next issue is how to apply it in the control algorithm. Assume that our objective is to drive all SIR values above a given SIR target. As pointed out above, the channel SIR values are stochastic and therefore will fluctuate under any power control algorithm. Thus, one cannot expect pointwise convergence unless link gains are frozen in time and no measurement or sampling errors occur. These assumptions form the foundation of the snapshot analysis commonly used in most studies.
Observe that snapshot analysis could be useful when certain favorable conditions hold: for instance, fast fading is averaged out, mobiles move slowly and power updates are relatively fast. Under such conditions, the controlled powers drift toward values in the vicinity of a temporary fixed point, at a rate much faster than the rate at which link gains change. Thus, quasi-convergence and quasi-stability do occur.
However, most often the favorable conditions above do not hold, making snapshot analysis and pointwise convergence immaterial. An alternative approach, which results in more robust algorithms, requires either convergence almost everywhere, convergence in probability, or convergence in distribution.
The first criterion is the strongest one as it implies the others. Stability of an iterative power control algorithm, with respect to a special case of convergence in probability and specific error sources, has recently been studied. Under the assumption that estimator errors spring only from additive white Gaussian noise and that corruptions of the received information bits are independent and identically distributed, it has been shown there that the iterative power control converges in the mean-squared metric.
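For concreteness, a minimal sketch of the classical distributed SIR-balancing iteration P_i ← P_i · (SIR_target / SIR_i), in the spirit of the distributed algorithms of [7,9], is given below. The gains, noise and target are made-up values, the iteration uses exact SIR values rather than the noisy, delayed estimates discussed above, and it is not the specific stochastic algorithm whose stability is analyzed here.

```python
def sir_values(G, P, noise):
    """Exact SIRs for co-channel links; G[i][j] is the gain from transmitter j to receiver i."""
    n = len(P)
    return [G[i][i] * P[i] /
            (sum(G[i][j] * P[j] for j in range(n) if j != i) + noise[i])
            for i in range(n)]

def balance_powers(G, noise, sir_target, iterations=50, p0=1.0):
    """Distributed SIR-balancing iteration: P_i <- P_i * sir_target / SIR_i."""
    P = [p0] * len(G)
    for _ in range(iterations):
        P = [p * sir_target / s for p, s in zip(P, sir_values(G, P, noise))]
    return P

if __name__ == "__main__":
    G = [[1e-7, 3e-9, 1e-9],      # made-up link gains, one row per receiver
         [2e-9, 8e-8, 4e-9],
         [1e-9, 2e-9, 6e-8]]
    noise = [1e-12] * 3
    P = balance_powers(G, noise, sir_target=10.0)
    print("powers:", ["%.3e" % p for p in P])
    print("SIRs  :", ["%.2f" % s for s in sir_values(G, P, noise)])
```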
When the delayed estimator error is taken into account, it is quite obvious that neither of the assumptions above holds. They require the existence of k-th order moments and time-independent estimator errors. The delayed estimator error has been treated by a technique which combines worst cases and percentile construction. A similar technique is also used to deal with errors resulting from packet-switched and DTX transmission modes.
With this technique, stability is obtained by bounding the SIR fluctuation to a pre-specified range with a pre-specified probability. The probability is interpreted as the error correction capability. The specific control function is derived by incorporating the estimator distribution percentiles into the power control algorithm.
The last requirement we address is ease of implementation and robustness. A simple example which demonstrates this issue is fixed versus variable power increments.
Whereas theoretical papers consider both cases, practical implementations use only fixed increments as they are easier to implement and more robust.
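A minimal sketch of the fixed-increment variant is given below: the receiver issues a one-bit up/down command and the transmitter moves its power by a fixed step. The 1 dB step, the SIR target, the dynamic range and the one-to-one SIR response are all assumed illustrative values, not parameters of any particular standard.

```python
def fixed_step_command(sir_est_db, sir_target_db):
    """One-bit power control command: +1 dB if below the target, -1 dB otherwise."""
    return 1.0 if sir_est_db < sir_target_db else -1.0

def apply_command(p_dbm, command_db, p_min_dbm=-50.0, p_max_dbm=24.0):
    """Move the transmit power by a fixed increment, clipped to the dynamic range."""
    return min(p_max_dbm, max(p_min_dbm, p_dbm + command_db))

# The transmitter walks toward the target in 1 dB steps and then oscillates
# around it -- the SIR fluctuation that any power control algorithm must tolerate.
p_dbm, sir_db, target_db = 0.0, 4.0, 9.0
for _ in range(8):
    cmd = fixed_step_command(sir_db, target_db)
    p_dbm = apply_command(p_dbm, cmd)
    sir_db += cmd          # crude assumption: SIR follows the power change 1:1
print(f"final power = {p_dbm:.0f} dBm, final SIR = {sir_db:.0f} dB")
```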
3. A solution by a public knowledge bank
The discussion in section 2 demonstrates part of the complexity involved in radio resource allocation, especially when practical aspects are taken into account. Power control is only one out of many resource allocation problems encountered by designers and researchers. From the system point of view, any specific resource allocation algorithm must be evaluated with respect to the entire system. Considering the limitations of analytical methods, system simulation is the only practical alternative.
Indeed, most researchers, manufacturers and standardization committees evaluate their designs and algorithms by simulation. Simulation, although tedious and expensive, is straightforward. Thus, the most common solution methodology today comprises the following steps:
1. Problem description.
2. Model definition.
3. Algorithm derivation and evaluation by exact and approximate mathematical methods.
4. Tailored system simulation.
As most researchers are aware, this solution paradigm has severe built-in flaws which make the results quite unscientific. Reports generated by this solution paradigm cannot be properly reviewed and compared.
The main flaw in this process is the absence of a reference system by which results can be evaluated and compared. The computer industry solves this problem with benchmarks.
A second flaw is the process by which the problem is described and translated into a model. Most often no specification language is used (even if one exists). Moreover, during the translation into a mathematical model (which can be programmed into a simulator), many explicit and implicit assumptions are made.
Those assumptions are impossible to track and may generate large deviations among studies of the same problem. A third flaw lies in the body of the simulator implementation. The simulation programs which are used by the researchers are almost never available and therefore cannot be tested. (Note how many bugs are discovered when a new software program becomes available for the first time.) Simulations of radio systems are quite complex programs, and programming bugs are only part of the problem.
Any experienced scientific programmer is aware of how much simplifying programming shortcuts, stopping rules, round-offs, etc., may impact the results. Simulation of stochastic processes is particularly sensitive to such implementation details. Beside these flaws, there is the issue of the time and cost of developing the simulation programs.

Herein, we propose a new paradigm to deal with modeling and simulation issues: an open and publicly available Knowledge Bank. Following this paradigm, a cellular radio system consisting of modular building blocks and a bank of algorithms will reside on a server host which can be accessed via the Internet. Remote users will connect to the server, from which they will browse the code, run the simulator, add or replace building blocks and algorithms, and deposit their own algorithms and results. Any interested party will be able to use this system as a benchmark and as a source for other algorithms. Furthermore, their algorithms (definition and code) will be available for everyone else to review, as will the system building blocks. Such a vast exposure of the system to the research community, allowing fast knowledge sharing, will accelerate the development of simulators, improve the models and enrich the knowledge bank.
The incredible evolution pace of the WWW demonstrates how efficient knowledge sharing can be. The big question is whether or not such a knowledge bank can be built. We assert, based on personal experience, that with current Internet technology of Web browsers and the object-oriented, platform-independent programming language Java, such a knowledge bank is feasible. Moreover, the programming language makes run-time integration of new algorithms and system building blocks possible.
References
[1] M. Andersin, Z. Rosberg and J. Zander, Gradual removals in cellular PCS with constrained power control and noise, Wireless Networks 2, IEEE (1999).
[2] M. Andersin, Z. Rosberg and J. Zander, Distributed discrete power control in cellular PCS, in: Proc. Workshop on Multiaccess, IEEE (1997).
[3] M. Andersin, Z. Rosberg and J. Zander, Soft admission in cellular PCS with constrained power control and noise, in: Proc. 5th WINLAB Workshop (April 2002).
[4] M. Andersin and Z. Rosberg, Time variant power control in cellular networks, in: Proc. 7th PIMRC Symposium (October 2000).
[5] N. Bambos and G.J. Pottie, On power control in high capacity cellular radio networks, in: Proc. 3rd WINLAB Workshop (October 2004).
[6] ETSI, Group Speciale Mobile (GSM) Recommendation (Sophia Antipolis, France, 1999).
[7] G.J. Foschini, A simple distributed autonomous power control algorithm and its convergence, IEEE Transactions on Vehicular Technology 42(4) (2002).
[8] M. Goldburg and R.H. Roy, The impacts of SDMA (Spatial Division Multiple Access) on PCS system design, in: Proc. ICUPC (October 2004).
[9] S.A. Grandhi, R. Vijayan and D.J. Goodman, A distributed algorithm for power control in cellular radio systems, IEEE Transactions on Vehicular Technology 42(4) (November 1999).
[10] S.A. Grandhi, R. Vijayan, D.J. Goodman and J. Zander, IEEE Transactions (1998).