About circuits and assumptions

Recently I took a look at the excellent book The Art of Electronics (by Horowitz and Hill) and I got kind of frustrated that they avoided “hard” math. I can understand why they decided to do that, but at the same time it’s kind of weird to me to tell people “this involves a lot of math, so just take it as true”.

I wanted to refresh my understanding of how complex math is applied to circuits and I got frustrated when the book avoided it. I thought it could be a good idea to write it down, because many people can find themselves in the same position I was in: I’m an engineer looking for an explanation that helps me develop an intuitive understanding of a physical phenomenon I understood years ago but need to refresh.

There are many things in this world that are hard but that doesn’t mean we have to accept them the way they are without understanding them.

Circuit analysis is hard. Of course it is. But the amount of math involved is quite limited compared with other areas. Just let me give this a try and see if we can understand it together.

Nobody likes differential equations

Inductors and capacitors have weird relationships between the current and the voltage.

While the resistor has the simple and reasonable Ohm’s law:

V(t) = R · I(t)

The inductor has a weird derivative there:

V(t) = L · dI(t)/dt

And the capacitor goes crazier:

I(t) = C · dV(t)/dt

Meaning that if we want to calculate the voltage in a capacitor from the current, we need to compute an integral.
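
If you want to poke at that integral with actual numbers, here’s a minimal Python sketch (the capacitor value and the current waveform are made up for illustration) that recovers the capacitor voltage by accumulating charge:

```python
# A minimal sketch (made-up values): recover a capacitor's voltage from
# its current by accumulating charge, V(t) = (1/C) * integral of I dt.
import math

C = 1e-6                                               # 1 uF capacitor
dt = 1e-6                                              # integration step, in seconds
I = lambda t: 1e-3 * math.sin(2 * math.pi * 1000 * t)  # 1 mA, 1 kHz current

v, t = 0.0, 0.0
for _ in range(5000):        # simulate 5 ms
    v += I(t) * dt / C       # dV = I·dt / C
    t += dt
print(f"V({t * 1e3:.1f} ms) = {v:.3f} V")
```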

All these weird formulas are horrible to deal with unless you are a masochist or a mathematician, which are synonyms in many cases. But at the same time they are the reason why resistors, capacitors and inductors are the building blocks of electronics.

Both of the weird formulas can be explained in a kind of intuitive way, though:

The inductor makes use of the magnetic field. If the current changes (its derivative is not zero), the magnetic field changes too, and that produces a voltage (electromotive force) according to Faraday’s law.

The capacitor is also cool. A capacitor is a simple device: two electrodes with some dielectric in the middle. According to Coulomb’s law, charges on one of the electrodes are going to attract or repel charges on the other until a stable state is reached. If the voltage between the electrodes changes, those charges are going to rearrange. During the rearrangement, charges move, so there’s current. If no rearrangement is needed, the charges are in balance and there’s no current.

So, inductors need changes in the current to produce voltage, and capacitors need changes in the voltage to produce current.

In both devices there’s a constant (L and C) that, like in the resistors (R) measures the effect of the physical characteristics of the device (the distance between the electrodes of a capacitor, the amount of loops of the inductor…). That’s the value we select on our circuits, because it measures the effect the device is going to have. It’s a really simple way to describe a device, but we don’t need anything else1.

Knowing those formulas and Kirchhoff’s circuit laws² you are ready to do some circuit analysis or design. Drop in some voltage or current generators described by a time-dependent mathematical function, put some RLCs in there, solve your differential equations and you are done.

The only problem is you probably forgot how to take a derivative, or you may only remember how to take the ones that make no sense for a real voltage or current generator (like those of functions that grow to infinity). Let’s not talk about how to solve the differential equations either, because I know you have no idea how to solve them, even if your teachers at university were really interested in you passing that stupid exam.

But you should remember, at least, there are really good tricks to avoid differential equations because nobody likes them.

Steady state

Most of the tricks are based on the same idea: consider specific cases for our signals.

The process described previously works for any kind of voltage or current, but now we are going to restrict those to simpler scenarios that help us reduce the mathematical complexity. The first of them is assuming the circuit’s state is steady, meaning that we waited until the whole circuit settled and our signals keep the same shape over time.

This is extremely important because when we switch on a circuit we are introducing an extreme change (for instance, switching the power supply from 0 V to 5 V) that is not supposed to happen often and is not related to the functionality of the circuit. That change is going to alter the circuit’s behavior until the whole circuit gets used to the new state, and then it’s going to work normally.

But, why does that happen?

If you remember the formulas from the beginning, it’s easy to deduce… That extreme change has a very high (infinite, in the ideal case) derivative that is only going to happen once, but its effect is going to linger in the circuit for a longer time because of the inductors and capacitors it may have.

If we wait long enough, that effect is going to be lost in time, like tears in the rain. That’s the steady state. Once we have it, we can make more assumptions.
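
A quick way to see the transient die out is to simulate it. Here’s a hedged sketch, with arbitrary component values, that steps a 5 V supply into a series RC and integrates with plain Euler steps:

```python
# Stepping a 5 V supply into a series RC (arbitrary values) and
# integrating dVc/dt = (Vin - Vc) / (R*C) with plain Euler steps.
R, C = 1e3, 1e-6            # 1 kOhm, 1 uF -> time constant of 1 ms
Vin, Vc = 5.0, 0.0
dt = 1e-5
for step in range(1001):    # 10 ms, ten time constants
    if step % 200 == 0:
        print(f"t = {step * dt * 1e3:4.1f} ms   Vc = {Vc:.3f} V")
    Vc += (Vin - Vc) / (R * C) * dt
# After a few time constants Vc just sits at 5 V: that's the steady state.
```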

DC

Once we’ve reached steady state, signals may or may not change with time. If they don’t, we call it DC. That makes our life much easier, because the derivative of a constant is zero.

In DC, resistor’s formula (Ohm’s law) stays the same, but everything is constant now:

V = R · I

But inductor’s formula is simpler:

V = 0

And the capacitor’s formula is simpler too:

I = 0

This means inductors are short circuits and capacitors are open circuits.

DC is easy then, isn’t it?

AC

AC happens when signals change with time but are sinusoidal (they look like a sine). Sines have very interesting properties for our purpose of reducing the complexity of the formulas. The derivative of a sine is a delayed sine (a cosine if you wish), and the same goes for integrals.

This means we can model inductors and capacitors as delayers (huh?): devices whose voltage-current relation is a delay plus a magnitude adjustment based on their constants (L or C).

Say we have this voltage in a capacitor:

V(t) = A · sin(2πft)

The current is the derivative of that, multiplied by C:

I(t) = A · C · 2πf · cos(2πft)

Which is, if you want:

I(t) = A · C · 2πf · sin(2πft + π/2)

So there you have the delay (π/2), and the sine with a different amplitude.
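
You can check that identity numerically. A small Python sketch (A, C and f are arbitrary example values) comparing the derivative route with the delayed-sine route:

```python
# Numerical check (arbitrary values): the capacitor current C*dV/dt
# matches A*C*2*pi*f * sin(2*pi*f*t + pi/2).
import math

A, C, f = 2.0, 1e-6, 50.0
w = 2 * math.pi * f
V = lambda t: A * math.sin(w * t)

t, h = 3e-3, 1e-7
dVdt = (V(t + h) - V(t - h)) / (2 * h)            # centered difference
print(C * dVdt)                                   # derivative route
print(A * C * w * math.sin(w * t + math.pi / 2))  # delayed-sine route
```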

We could do all of this in one step by hand, knowing the changes are going to be applied that way. But that’s boring as shit, and impossible to automate.

Simplifying?

So someone came up with a great idea:

wHy noT USe coMpLex NumBErs? LOL

They didn’t put it that way, but that’s what we got.

If you didn’t cry when reading about sines and cosines there you might or might not be able to understand the relation they have with complex numbers (Euler’s formula):

z = e^{ix} = cos(x) + i · sin(x)

Funny enough, we can put that complex number z on the unit circle, making its imaginary part (the projection on the vertical axis) be the sine and its real part (the projection on the horizontal axis) be the cosine. There’s a Wikipedia image that explains this beautifully.
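
Python’s complex numbers let you check Euler’s formula directly; a tiny sketch, with an arbitrary example angle:

```python
# Euler's formula with Python's complex numbers: e^{ix} lands on the
# unit circle at angle x (x here is an arbitrary example angle).
import cmath, math

x = math.pi / 3
z = cmath.exp(1j * x)
print(z)                         # ~(0.5+0.866j)
print(math.cos(x), math.sin(x))  # same real and imaginary parts
print(abs(z))                    # 1.0: always on the unit circle
```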

Cool stuff but not usable yet.

Our sinusoidal signals are changing with time, so we need to tweak this a little. If instead of x we use 2πft in the formula, we would have the same sinusoidal signal we had before, or something very similar to it:

z = e^{i·2πft} = cos(2πft) + i · sin(2πft)

Now our complex number is turning counterclockwise on the unit circle, making f rounds per unit of time. Which means it has an angular speed, ω, of 2πf (we’ll substitute that in the formula later to make it simpler).

The next trick is to use that kind of complex numbers instead of sines or cosines as the representation of our signals in a circuit, but for that we have to agree that, even though we use them, we are only dealing with their real part, because the imaginary part doesn’t exist in real life.

In the previous example in the capacitor, we said this was its voltage:

V(t) = A · sin(2πft)

If we use the complex representation and remember we just need the real part, it must be something like:

V(t) = A · sin(2πft) = A · cos(2πft - π/2)
V(t) = A · Re( e^{i(2πft - π/2)} )

Let’s use ω\omega as we said before, to make all this simpler, and we’ll use jj instead of ii for the imaginary part, to avoid mixing it with the current:

V(t) = A · Re( e^{j(ωt - π/2)} )

Hey! Don’t complain that much, we didn’t change anything…

Let’s calculate the current then:

I(t) = C · dV(t)/dt

I(t) = C · d( A · Re( e^{j(ωt - π/2)} ) ) / dt

The derivative of the real part of something is the real part of its derivative, right? Can we agree on that?³

I(t) = C · A · Re( d( e^{j(ωt - π/2)} ) / dt )
I(t) = C · A · Re( jω · e^{j(ωt - π/2)} )

Let’s apply Euler’s formula now to split that exponential:

I(t) = C · A · ω · Re( j · ( cos(ωt - π/2) + j · sin(ωt - π/2) ) )

And let’s work on it a little to take it’s real part:

I(t) = C · A · ω · Re( j · ( cos(ωt - π/2) + j · sin(ωt - π/2) ) )
I(t) = C · A · ω · Re( j · cos(ωt - π/2) - sin(ωt - π/2) )
I(t) = C · A · ω · ( -sin(ωt - π/2) )
I(t) = C · A · ω · sin(ωt - π/2 + π)

And there it goes, it’s the same thing we got before:

I(t) = C · A · ω · sin(ωt + π/2)

Amazing, we did the same thing twice, but this second time it was much more complex than before, for no real reason. Good job.

We can start spotting some patterns here and there, though.

If we stop the process a little before arriving at the end and keep the current as a complex function, promising that we are going to take the real part later, we can unleash the power of complex numbers:

𝕀(t) = C · A · d( e^{j(ωt - π/2)} ) / dt
𝕀(t) = C · A · jω · e^{j(ωt - π/2)}

Thinking about Euler’s formula, in a very specific case where the cosine is zero and the sine is one:

j = 0 + j · 1 = e^{jπ/2}

So:

𝕀(t) = C · A · ω · e^{jπ/2} · e^{j(ωt - π/2)}

Putting it together with the other exponential:

𝕀(t) = C · A · ω · e^{j(ωt - π/2) + jπ/2}
𝕀(t) = C · A · ω · e^{jωt}

And now taking the real part as we promised we get to the same place:

I(t) = C · A · ω · Re( e^{jωt} )
I(t) = C · A · ω · cos(ωt)
I(t) = C · A · ω · sin(ωt + π/2)

Something interesting happened there. Differentiation in the complex world can be substituted by a multiplication by ω · e^{jπ/2}, or jω if you prefer. This has many interesting implications. First, differentiation is a π/2 delay on the signal (plus a scaling by ω) and, second, it can be modeled by a multiplication by the complex number jω.

Remember this is only true in our case, where we know our differentiation variable, t, appears with the constant angular speed ω all the time. Which is true, because we only introduced a sinusoidal signal to the circuit, one that operates at a specific (and constant) frequency, so the angular speed is constant too.

Those examples show that, if we wanted to simplify all our stuff (and we do), we could use complex numbers to get rid of the derivatives and then come back to the real world at the end of the calculation. Whenever we see a derivative we can just put jω instead.
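
Here’s a small numerical sanity check of that substitution (ω and t are arbitrary values, and the derivative is approximated with a centered difference):

```python
# Checking "derivative = multiply by j*omega" on z(t) = e^{j*w*t}
# (w and t are arbitrary; the derivative is a centered difference).
import cmath, math

w = 2 * math.pi * 100
z = lambda t: cmath.exp(1j * w * t)

t, h = 1e-3, 1e-8
print((z(t + h) - z(t - h)) / (2 * h))  # numerical derivative
print(1j * w * z(t))                    # multiplication by j*omega
```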

Let’s do it in the voltage-current relationship formulas.

But before we do that, we have to remember we are working with “fake signals”. “Fake signals” are complex, so we are going to write them with thick letters like 𝕀(t) or 𝕍(t) to remind us to take only the real part when we finish our calculation.

The resistor has the same formula:

𝕍(t) = R · 𝕀(t)

The inductor had a derivative; we can substitute it with the multiplication:

𝕍(t) = L · d𝕀(t)/dt = L · jω · 𝕀(t)

And the capacitor:

𝕀(t) = C · d𝕍(t)/dt = C · jω · 𝕍(t)

I tricked you before, because I didn’t show the voltage depending on the current. Now it’s easy to rearrange and we don’t need any integral (see? everything starts to make sense now):

𝕍(t) = 1/(C · jω) · 𝕀(t) = -j/(C · ω) · 𝕀(t)

Now, all those formulas have something in common! They are kind of an extension of Ohm’s law, as if R were a complex number. Something like this:

𝕍(t) = ℤ · 𝕀(t)

This new ℤ is a complex number that can be real (when the device is a resistor), positive imaginary (like in an inductor) or negative imaginary (capacitor).

Now that we avoided all those weird formulas, it looks like we spent so much time on this that the goal must be something else, something better.

It certainly is better. If you think about the way resistors are combined in a resistor-only circuit (parallel and series), you realize they are only added, multiplied and divided. If we do that to complex numbers, the result is one complex number, with a real and an imaginary part. This is fine, because it means we can combine these new “fake resistors” and make them work with classic circuit analysis based on Kirchhoff and Ohm.

Those fake resistors actually have a name: impedance.
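
In code, impedances are literally just complex numbers, and combining them is plain arithmetic. A minimal sketch with arbitrary component values:

```python
# Impedances as complex numbers (arbitrary component values):
# Z_R = R, Z_L = j*w*L, Z_C = 1/(j*w*C). Series is plain addition.
import math

f = 1e3
w = 2 * math.pi * f
R, L, C = 1e3, 10e-3, 100e-9

Z_R = complex(R, 0)       # purely real
Z_L = 1j * w * L          # positive imaginary
Z_C = 1 / (1j * w * C)    # negative imaginary, same as -1j / (w * C)
print(Z_R, Z_L, Z_C)
print(Z_R + Z_L + Z_C)    # a series RLC is just the sum
```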

Phasors

But there’s something else to solve yet: fake signals. Those complex signals are going to appear here and there, and they can be represented as a complex formula as we did, but we can stretch the idea much further, and we will.

Let’s say we have the voltage we had in the examples:

V(t) = A · sin(2πft)

The fake signal associated with it is the one we got before, but keeping the imaginary part:

𝕍(t) = A · e^{j(2πft - π/2)}

If we work on it:

𝕍(t) = A · e^{j(ωt - π/2)}
𝕍(t) = A · e^{-jπ/2} · e^{jωt}

We can separate that into three pieces: the amplitude, the delay and the shape.

The signal’s amplitude is a simple concept: sines’ and cosines’ amplitude is 1. That means they go from -1 to 1. That extra constant there is making them stretch or compress in the Y axis. Easy.

The delay is what I’ve been calling it so far, but that’s not necessarily its real name (phase is the usual one). I called it delay because it looks like a delay. A sine is a delayed cosine, kind of a cosine that happens later (or earlier if you want, because they are periodic). That delay can be modeled by the constant added inside the body of the cosines and sines. Think about this cosine, cos(x): if you compare it with this one, cos(x + π), the difference is there’s a half-turn delay (π radians, or 180 degrees if you want) between them.

And the shape is how the signal looks: just a cosine (or a sine) in this case.

But we didn’t make a clear relationship of those ingredients with how an actual signal looks like, so let’s take a simple signal here and make a comparison. If we call that delay ϕ\phi. We can have this kind of signal here:

V(t) = A · cos(ωt + ϕ)

We can obtain its complex representation with the process we just did, and we’ll get this:

𝕍(t) = A · e^{jϕ} · e^{jωt}

If we just take the real part of that, we can all agree that we’ll get the cosine delayed by ϕ with amplitude A, right?

We can split those three ingredients following a really simple criterion: those that carry some information and those that don’t. If you think about it, we are in AC, so all our signals are sinusoidal and have a fixed frequency, so the shape part is not giving us any information.

If we put the amplitude and the delay together, we have a complex number with a magnitude A and an angle ϕ that tells us all we need to know to rebuild the signal.

Graphically speaking, the amplitude represents the length of the stick you put on the unit circle, and the delay (or the angle, or whatever you call it) represents the initial angle you have to set before you make the stick rotate counterclockwise at your constant angular speed ω.

Knowing this, we can just consider the shape of the signal redundant, ignore it, and use only the other, meaningful, part. That representation is called a phasor.

Our new phasorical voltage now looks like this:

𝕍 = A · e^{jϕ}

We don’t use tt now, because it’s implied in the phasor representation, and we still use thick letters to say this is a phasor and has no meaning in our world.

That complex number can be represented in many ways too, like in a real-vs-imaginary way, or with this fantastic representation I love:

𝕍 = A∠ϕ
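
A phasor in code is, again, just a complex number. This sketch (A and ϕ are arbitrary example values) converts A∠ϕ to the real-vs-imaginary form and back:

```python
# A phasor is just a complex number: A∠phi <-> A * e^{j*phi}
# (A and phi are arbitrary example values).
import cmath, math

A, phi = 3.0, -math.pi / 2
V = A * cmath.exp(1j * phi)    # build the phasor from A∠phi
print(V)                       # rectangular (real vs imaginary) form
print(abs(V), cmath.phase(V))  # and back: magnitude and angle
```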

But phasors are not only a simple representation of our cosines, they are something else.

If you look back at what we did with inductors and capacitors, we said they were changes in the angle and the amplitude of the signals…

So they are just changes in the phasor part, not in the shape part. Interestingly enough, those changes apply perfectly if we use the extended Ohm’s law on phasors directly:

𝕍 = ℤ · 𝕀

Let’s redo the example we used before but faster.

Our voltage in a capacitor was this:

V(t) = A · sin(2πft)

And that can be tweaked until we obtain its phasor representation, as we already did (convert to cosine, then to complex and all that):

𝕍(t) = A · e^{-jπ/2} · e^{jωt}
𝕍 = A∠(-π/2)

Now we want the current in the capacitor so:

𝕍 = ℤ · 𝕀
A∠(-π/2) = -j/(ωC) · 𝕀

The angle representation is not very useful here, but the real-vs-imaginary one looks better⁴. Euler’s formula to the rescue again.

A · e^{-jπ/2} = -j/(ωC) · 𝕀

A · ( cos(-π/2) + j · sin(-π/2) ) = -j/(ωC) · 𝕀

-j · A = -j/(ωC) · 𝕀

𝕀 = AωC

Coming back to the angle representation:

𝕀 = AωC ∠ 0

Let’s unpack the phasor now and see what we have:

I(t) = AωC · cos(ωt + 0)

This, if you want, is a delayed sine, which is the result we got before.

I(t) = AωC · sin(ωt + π/2)

BOOM! Right in your face.
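
And if you’d rather let Python do the complex arithmetic, the whole phasor version fits in a few lines (A, f and C are arbitrary example values):

```python
# The capacitor example again, as phasor arithmetic
# (A, f and C are arbitrary example values).
import cmath, math

A, f, C = 2.0, 50.0, 1e-6
w = 2 * math.pi * f

V = A * cmath.exp(-1j * math.pi / 2)  # phasor of A*sin(wt): A∠(-pi/2)
Z = -1j / (w * C)                     # capacitor impedance
I = V / Z                             # extended Ohm's law
print(abs(I), cmath.phase(I))         # A*w*C at angle ~0, as derived
```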

Wrapping up

Solved then: we can calculate the response of the circuit just using the same concepts we had in DC but using complex numbers if we promise to discard the imaginary part later.

Also, impedances are interesting for understanding physical effects: their real part is associated with the resistance and their imaginary part with the delay. When mixing impedances together, like adding a capacitor and a resistor in series, the resulting complex number has components in both parts. That equivalence is really useful when calculating the power consumption of circuits and many things more.

We can also start to pay attention to the effect frequency has. As capacitors and inductors don’t change the frequency of the signals, the calculation looks the same for every possible frequency, so we can keep frequency as a parameter and then try different values. Checking the result of our example, we can reach a really interesting level of understanding:

I(t) = AωC · sin(ωt + π/2)

If the frequency is higher, the amplitude goes up, because there’s a frequency component in it (remember 2πf = ω). If our frequency is zero, the resulting current is zero. Does that make sense?

It does, because in DC, where the signals are constant (frequency is zero), capacitors act as open circuits. There’s no current through them.
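
You can watch that happen by sweeping the frequency (same arbitrary A and C as in the earlier sketches):

```python
# Sweeping frequency in the same example to watch the current
# amplitude A*w*C grow with f (A and C are arbitrary values).
import math

A, C = 2.0, 1e-6
for f in (0, 10, 100, 1000, 10000):
    w = 2 * math.pi * f
    print(f"{f:5d} Hz -> |I| = {A * w * C:.6f} A")
```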

The extra thing

The power of what we just did goes further than avoiding some differential equations, which is a good thing on its own. This stuff is much more powerful.

In the early XIX century, a French motherfucker called Jean-Baptiste Joseph Fourier made really interesting contributions to math that changed the way we understand signals nowadays.

The guy said periodic functions can be described as a combination of harmonic sinusoids (Fourier series). We could demonstrate it here, but this is long enough already, so let’s leave the Wikipedia picture that will destroy your mind.
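
Just to taste the idea (this is not a demonstration), here’s a sketch summing the odd harmonics of a square wave’s Fourier series and watching the partial sums approach the square wave’s value:

```python
# Partial sums of a square wave's Fourier series: odd harmonics
# sin((2k+1)*w*t)/(2k+1), scaled by 4/pi, sampled at one instant.
import math

w = 2 * math.pi     # 1 Hz fundamental
t = 0.1             # the square wave is at +1 here
for n_terms in (1, 3, 10, 100):
    s = sum(math.sin((2 * k + 1) * w * t) / (2 * k + 1)
            for k in range(n_terms))
    print(f"{n_terms:3d} harmonics -> {4 / math.pi * s:.4f}")
# The partial sums wiggle their way towards 1.0, the square wave's value.
```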

Thinking about this… We could just consider the effect the circuit has on the different frequencies, and that way see how it responds to a wider set of signals than a simple sine or cosine.

Interestingly enough, what we did is a very specific case of a Fourier Transform, which is a generalization of the Fourier series that, applied to our formulas, helps us avoid the differential equations. We didn’t pay much attention to that part, but now that you are here you can try to get to that point yourself. It’s surprisingly simple if you know what you are doing.

This is the interest of AC, basically.

Transient analysis: round 2

There’s no need to go on much detail at this level, but transient analysis can also be solved without tackling differential equations. There’s something called Laplace Transform, invented by another french dude called Pierre-Simon Laplace that let’s us avoid the differential equations.

This shit is kind of a wider idea than the Fourier Transform, but instead of working with the frequency, which is kind of understandable for standard humanoids, it goes crazier and defines a complex variable s.

There is, though, an equivalence between both worlds, in the sense that s = jω, so we could rewrite the impedance formulas of resistors, inductors and capacitors using s instead.
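
For instance, a sketch of those impedances written with s; evaluating them on the jω axis gives back the AC versions (the component values are arbitrary):

```python
# The same impedances written with Laplace's s. Substituting
# s = j*w gets the AC versions back (values are arbitrary).
import math

def Z_R(s, R): return R
def Z_L(s, L): return s * L
def Z_C(s, C): return 1 / (s * C)

s = 1j * 2 * math.pi * 1e3   # evaluate on the j*omega axis
print(Z_R(s, 1e3), Z_L(s, 10e-3), Z_C(s, 100e-9))
```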

Those new impedances will work with a representation of our signals in the Laplace world. Once we’ve obtained the Laplace representation of the signals we want to calculate, we can just come back to the world of time t by doing an inverse Laplace Transform. That’s a good way to avoid differential equations.

AC analysis happens to be the same thing but in the Fourier world, the world of frequency; as we were working specifically with sinusoidal signals, it was easier to manage and we didn’t need to make the jump through the Fourier Transform.

We may solve both things through Fourier and Laplace another day. Maybe not.

More assumptions

There are many assumptions we didn’t even discuss, but they can change the way all of this works.

First, the devices we described are ideal, which is not going to happen in the real world, but it’s fair enough if you are an engineer because… Who cares, right? If the thing works more or less, our job is done.

On the other hand, if we wanted to be more precise we could model real devices as combinations of those ideal devices. Inductors would have some resistance too, because the process of making them affects their quality as conductors and so on. Not that traumatic, right?

Second, we somehow considered our circuits’ size to be zero. Which is clearly false.

Signals need time to travel through cables. It’s a really short time, to be honest, because their speed is around the speed of light, but sometimes that time is important to consider. When? When the signals change very fast.

Signal’s frequency defines how many oscillations they perform per second. If we know their speed we can also obtain the width of each oscillation. That width is the wavelength (λ\lambda, normally).

If the frequency is very low, the wavelength is so long that there’s no noticeable difference between two points of the same cable. But if the wavelength is around the physical size of the cable, what happens on one side of the cable hasn’t arrived at the other side yet. Does this make sense?

When the wavelength is that small, cables start to act weirdly and all the circuit analysis we saw here is no longer applicable. That opens the door to transmission line theory. Something we are probably going to visit in the near future.

Funny things await us.


  1. In the real world we need more: tolerance, maximum power… But those are limits that real (non-ideal) components have.↩︎

  2. These laws are really simple; they just say two things:

    • The sum of voltages in a loop is zero
    • The sum of currents in a junction is zero
    ↩︎
  3. The real and imaginary parts are added to each other, so the derivative of a complex function is the derivative of its real part plus the derivative of its imaginary part. They are independent, then.↩︎

  4. The angle representation is a nice way to write a phasor so it can be read easily, but it’s really hard to operate on, so we have to convert. Converting to the real-vs-imaginary form is easy and you should be comfortable with it. Calculators do it for you if you want, so this is faster than it looks in the formulas there.↩︎