Time poses a fundamental problem in neuroscience, in part because at its core the brain is a prediction machine: the brain evolved to allow animals to anticipate, adapt, and prepare for future events. To accomplish this function, the brain tells time on scales spanning 12 orders of magnitude. In contrast to most man-made clocks, which share a very simple underlying principle (counting the "ticks" of an oscillator), evolution has devised many different solutions to the problem of telling time. On the scale of milliseconds to seconds, experimental and computational evidence suggests that the brain relies on neural dynamics to tell time. For this strategy to work, two conditions must be met: the states of the neural network must evolve in a nonrepeating pattern over the relevant interval, and the sequence of states must be reproducible every time the system is reengaged. Recurrently connected networks of neurons can generate rich dynamics, but a long-standing challenge is that the regimes that create computationally powerful dynamics are chaotic, and thus cannot generate reproducible patterns. We have recently demonstrated that by tuning the weights (the coupling coefficients) between the units of artificial neural networks, it is possible to generate locally stable trajectories embedded within chaotic attractors. These stable patterns function as "dynamic attractors" and can be used to encode and tell time. They also exhibit a novel feature characteristic of biological systems: the ability to autonomously "return" to the pattern being generated in the face of perturbations.