There are quite a few simulation tools available today. Various methodologies, various levels of GUI usability, but they all have one thing in common: they are all really expensive. Ranting aside, it is useful to know how the different simulation techniques work, what their inherent limitations are, and what they allow.
In previous posts I reviewed an open source FDTD library called openEMS. It is actually an Equivalent-Circuit FDTD (EC-FDTD) solver, but the theory is very similar. Another prevalent approach for EM simulation is the Finite Integration Technique (FIT). I will briefly review both methods here, attempting to understand what the discrepancies between them are, if any. Also, we'll try to see whether there is a significant advantage to using either of these two methods.
FDTD
This approach starts with discretizing a certain volumetric domain, $\Omega$, into cubes. These cubes don't necessarily have to be identical, but the walls of two adjacent cubes should at least overlap.

This method starts from explicitly phrasing Faraday's law,

$$\nabla \times \mathbf{E} = -\mu \frac{\partial \mathbf{H}}{\partial t},$$

and Ampère's law,

$$\nabla \times \mathbf{H} = \mathbf{J} + \varepsilon \frac{\partial \mathbf{E}}{\partial t}.$$
Each of these equations can be discretized using a 1st order (central, "half step") difference approximation, given here:

$$\frac{\partial f}{\partial x} \approx \frac{f\left(x + \frac{\Delta x}{2}\right) - f\left(x - \frac{\Delta x}{2}\right)}{\Delta x}.$$
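As a quick numerical sanity check, here is a minimal Python sketch (the function and variable names are mine, not taken from any solver) showing that this central "half step" difference is second-order accurate:

```python
import numpy as np

# Central ("half step") difference: f'(x) ~ (f(x + dx/2) - f(x - dx/2)) / dx
def central_diff(f, x, dx):
    return (f(x + dx / 2) - f(x - dx / 2)) / dx

# Check against a known derivative: d/dx sin(x) = cos(x)
x = 1.0
for dx in (1e-1, 1e-2, 1e-3):
    err = abs(central_diff(np.sin, x, dx) - np.cos(x))
    print(f"dx={dx:.0e}  error={err:.2e}")
```

Shrinking dx by a factor of 10 shrinks the error by roughly 100, which is the quadratic convergence the half-step sampling buys us.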
Yielding the equation system for each of these components; for $H_x$, for example,

$$H_x^{n+\frac{1}{2}} = H_x^{n-\frac{1}{2}} + \frac{\Delta t}{\mu}\left[\frac{E_y^n\left(i,j+\frac{1}{2},k+1\right) - E_y^n\left(i,j+\frac{1}{2},k\right)}{\Delta z} - \frac{E_z^n\left(i,j+1,k+\frac{1}{2}\right) - E_z^n\left(i,j,k+\frac{1}{2}\right)}{\Delta y}\right],$$

where $H_x$ is sampled at $\left(i,j+\frac{1}{2},k+\frac{1}{2}\right)$, and similarly for the remaining five field components.
Notice the weird "half step" notation used here. Well, this is, as everything in life, a choice. Stepping outside the grid points inherently means interpolation. Interpolation does not mean data is added to the system, it just means that further approximations are being made. But no method is safe from silly assumptions, as we will all learn again and again. In the illustration below, the positions of these "half steps" are denoted.

There is also one more thing: the electric and magnetic fields affect each other. Namely, it is necessary to arrange the scheme such that the equations for the electric and magnetic fields complement each other, with $\mathbf{E}$ updated at integer time steps and $\mathbf{H}$ at the half steps in between. This is known as Yee's leap-frogging method.
This demands, however, that the time steps and spatial steps be small enough to support this abomination of an approximation, or, put more politely, to maintain a valid approximation.
The very nice thing about this method (and, in this writer's opinion, the very reason it is more common to find transient solvers than frequency domain ones) is that this is just about all that is necessary. All that is left is to formulate boundary conditions and start calculating.
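To make the "that's all there is to it" claim concrete, here is a minimal 1D FDTD sketch in Python, in normalized units with $c = 1$ and a Courant number of 1/2. The grid size, the Gaussian source, and all names are illustrative choices of mine, not any solver's API:

```python
import numpy as np

nz, nt = 200, 500
dz = 1.0
dt = 0.5 * dz              # Courant number 1/2, safely below the limit
ez = np.zeros(nz)          # E at integer grid points, integer time steps
hy = np.zeros(nz - 1)      # H at half grid points, half time steps

for n in range(nt):
    # Leap-frog: advance H a half step using the spatial difference of E...
    hy += dt / dz * (ez[1:] - ez[:-1])
    # ...then advance E using the spatial difference of H
    ez[1:-1] += dt / dz * (hy[1:] - hy[:-1])
    # Illustrative hard Gaussian source at the center cell
    ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)
    # PEC boundary condition: tangential E forced to zero at the walls
    ez[0] = ez[-1] = 0.0
```

The pulse propagates outward and reflects off the PEC walls; the whole solver is two array updates per time step.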
Let’s phrase a couple. A perfectly electric conductor boundary condition is definitely in order:

$$\hat{n} \times \mathbf{E} = 0,$$

or

$$E_{\mathrm{tan}} = 0$$

on the boundary facets.
Why not a current source, as well? An impressed current density simply enters Ampère's law as an additional source term:

$$\nabla \times \mathbf{H} = \mathbf{J}_{\mathrm{source}} + \varepsilon \frac{\partial \mathbf{E}}{\partial t}.$$
I am not getting into power port formulation right now, just making a point. However, a short discussion about validity is now in order. A very common condition would be: the maximum propagation distance within a given time step should not exceed the minimal cell size. Wow, that was short, right?
In other words,

$$\Delta t \le \frac{1}{c_{\max}\sqrt{\frac{1}{\Delta x^2} + \frac{1}{\Delta y^2} + \frac{1}{\Delta z^2}}},$$

where $c_{\max}$ is the maximal propagation speed in the block. This validity condition was given in [1].
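Evaluating this limit for a concrete grid is a one-liner; the cell dimensions below are made-up numbers for illustration:

```python
import numpy as np

c0 = 299792458.0                   # speed of light in vacuum, m/s
dx, dy, dz = 1e-3, 1e-3, 0.5e-3    # illustrative cell dimensions, m

# Courant limit for a uniform 3D grid: the time step must not let the
# wave front outrun the cell within one update
dt_max = 1.0 / (c0 * np.sqrt(1 / dx**2 + 1 / dy**2 + 1 / dz**2))
print(f"max stable dt = {dt_max:.3e} s")  # on the order of a picosecond
```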
FIT
The main difference here is that the discussion opens with the integral form of Maxwell's equations, e.g. Faraday's law:

$$\oint_{\partial A} \mathbf{E}\cdot \mathrm{d}\mathbf{s} = -\frac{\mathrm{d}}{\mathrm{d}t}\iint_{A} \mathbf{B}\cdot \mathrm{d}\mathbf{A}.$$
The integration along each edge of a cube is denoted by $\hat{e}$ for the electric field, and the integration over each facet by $\hat{\hat{b}}$ for the magnetic field. For a single cube, the left-hand side of the integrals, which encapsulates only edge integrations, obtains the following set of equations for Faraday's law:

$$\hat{e}_i + \hat{e}_j - \hat{e}_k - \hat{e}_l = -\frac{\mathrm{d}}{\mathrm{d}t}\hat{\hat{b}}_n \quad \text{for each facet } n,$$
where the hat notation is an edge integration, e.g.

$$\hat{e}_i = \int_{L_i} \mathbf{E}\cdot \mathrm{d}\mathbf{s},$$
and the double hat notation is an integration over an entire facet,

$$\hat{\hat{b}}_n = \iint_{A_n} \mathbf{B}\cdot \mathrm{d}\mathbf{A}.$$

The locations of the integrated edges and facets are illustrated below.

Look carefully now. Let’s compare these two equations:

$$\frac{E_y^n\left(i,j+\frac{1}{2},k+1\right) - E_y^n\left(i,j+\frac{1}{2},k\right)}{\Delta z} - \frac{E_z^n\left(i,j+1,k+\frac{1}{2}\right) - E_z^n\left(i,j,k+\frac{1}{2}\right)}{\Delta y} = \mu\frac{H_x^{n+\frac{1}{2}} - H_x^{n-\frac{1}{2}}}{\Delta t}$$

vs.

$$\hat{e}_i + \hat{e}_j - \hat{e}_k - \hat{e}_l = -\frac{\mathrm{d}}{\mathrm{d}t}\hat{\hat{b}}_n.$$
Apart from the deviation of half a delta, these two formulations are pretty much the same. That’s it. Nothing really to add, now you know it. I personally just find it a bit simpler to consider edges in FIT, but these two approaches result in a very similar formulation.
To incorporate Ampère's law, there is a need for an approach similar to the half steps used in the previous FDTD formulation. In [2], a secondary grid is defined, named the grid doublet.

There is no real need to formulate the final matrices here. Another difference that is apparent in the formulation in [2] is that the equations are then assembled into a matrix representation, e.g.

$$\mathbf{C}\hat{\mathbf{e}} = -\frac{\mathrm{d}}{\mathrm{d}t}\hat{\hat{\mathbf{b}}},$$

where $\mathbf{C}$ is the discrete curl matrix, holding the $\pm 1$ edge orientations.
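A toy sketch of what that matrix form looks like for a single facet with four edges (the signs and values are illustrative, not taken from [2]):

```python
import numpy as np

# One row of the discrete curl matrix C: +1/-1 entries encoding the
# orientation of each of the four edges circulating the facet
C = np.array([[1.0, 1.0, -1.0, -1.0]])
e_hat = np.array([0.2, 0.1, -0.05, 0.3])   # edge voltages (made-up values)
b_hat = np.array([0.0])                    # facet flux
dt = 0.1

# Faraday's law in matrix form, d/dt b = -C e, advanced one explicit step:
b_hat = b_hat - dt * (C @ e_hat)
print(b_hat)
```

For the whole domain the update is exactly this, with a much larger (and very sparse) $\mathbf{C}$.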
This sort of formulation allows solving an entire time iteration, for the whole domain, by just multiplying a matrix by a vector. However, this is a bit harder to parallelize than per-element adding/subtracting. Don't get me wrong, standard parallel computation hardware can speed up matrix multiplication pretty well, just not as efficiently. If you are looking for buzzwords: a reduction operator can only be sped up by a factor of about $\frac{N}{N/P + \log_2 P}$, where $P$ is the number of available cores and $N$ is the number of elements to reduce.
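The speedup factor above is easy to play with; the model ($N/P$ local work plus a $\log_2 P$-deep combine tree) is the usual idealized cost of a parallel reduction:

```python
from math import log2

def reduction_speedup(n, p):
    # Sequential cost: n additions.
    # Parallel cost: n/p local additions plus a log2(p)-deep combining tree.
    return n / (n / p + log2(p))

n = 2**20
for p in (2, 64, 4096):
    print(f"{p:5d} cores -> speedup {reduction_speedup(n, p):8.1f}")
```

Note how the speedup falls further and further behind $P$ as the log term starts to dominate.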
The (Very) Finite Difference
So we arrived at a very similar equation system, via two different methods. What is the difference, then? Well, this is the subtle story of observing the fine details.
In FDTD we supposedly assumed that we are looking for the field values at the suggested nodes. However, what we also assumed was a 1st order interpolation of the field values between the nodes. It's as good an assumption as any, probably. That is the very meaning of 1st order derivative approximations.
While formulating FIT, we made no such assumption. We basically performed a mean over each facet/edge. I'll leave it to the reader to decide whether this is more or less stable. Since we obtained a similar equation system, it is probably just about the same.
Instinctively, I find it easier to look at edge-based discretization, as boundary conditions are, a lot of the time, defined along edges. How do you feel about it?
Epilogue
In general, transient methods are well tested and easy to stabilize. Hopefully, we shall discuss the horrors of finite elements in the future. I have already demonstrated in the past the shortcomings of transient methods, especially in narrow-band structures. There is also an advantage that does not exist in frequency domain methods: simultaneous excitation. A transient method can immediately output the combined effect of multiple excitations.
Now, apart from understanding that FIT and FDTD are very similar, I want to arrive at a conclusion about when such simulators are relevant. Not whether they are better, but when they are better than other, frequency domain (FD, steady state) based methods. Well, on the evil side stands the meshing problem. Hexahedral meshing (cube mesh) is inherently less accurate than an unstructured mesh, e.g. a tetrahedral mesh, which is prevalent in FD methods.
Apart from how simple it is to formulate, FDTD/FIT can be called "embarrassingly parallel". Namely, they are very easy to parallelize given a multicore system. More GPU cores? More speed! As I mentioned earlier, GPUs just so happen to be very good at adding and subtracting large vectors/matrices. Multiplication is also available, but it isn't necessary most of the time. All you annoying engineers now want to cry out "Tensors! Anisotropic materials!". Well, first of all, shut up. Secondly, you are right. That does require small matrix multiplications, but at a scale that is very manageable for a single GPU core.
Hope you enjoyed this first computational electromagnetic method review. This is definitely one of my favorite subjects, and I would like to delve further into this in the future.
[1] Clemens, Markus, and Thomas Weiland. “Discrete electromagnetism with the finite integration technique.” Progress In Electromagnetics Research 32.32 (2001): 65-87.
[2] Sadiku, Matthew NO. Numerical techniques in electromagnetics with MATLAB. CRC press, 2018.