What does the “yield” keyword do?
What is the use of the `yield` keyword in Python? What does it do?
For example, I'm trying to understand this code¹:
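(The snippet below is reconstructed from the fragments quoted in the answers; the exact original may differ slightly.)

```python
def _get_child_candidates(self, distance, min_dist, max_dist):
    if self._leftchild and distance - max_dist < self._median:
        yield self._leftchild
    if self._rightchild and distance + max_dist >= self._median:
        yield self._rightchild
```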
And this is the caller:
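(Also reconstructed; this runs inside a method that receives `obj`, `min_dist` and `max_dist`.)

```python
result, candidates = [], [self]
while candidates:
    node = candidates.pop()
    distance = node._dist(obj)
    if distance <= max_dist and distance >= min_dist:
        result.extend(node._values)
    candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```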
What happens when the method `_get_child_candidates` is called?
Is a list returned? A single element? Is it called again? When will subsequent calls stop?
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: Module mspace.
To understand what `yield` does, you must understand what generators are. And before generators come iterables.
When you create a list, you can read its items one by one. Reading its items one by one is called iteration:
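For instance (a minimal illustration):

```
>>> mylist = [1, 2, 3]
>>> for i in mylist:
...     print(i)
1
2
3
```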
`mylist` is an iterable. When you use a list comprehension, you create a list, and so an iterable:
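```
>>> mylist = [x * x for x in range(3)]
>>> for i in mylist:
...     print(i)
0
1
4
```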
Everything you can use "for... in..." on is an iterable: lists, strings, files...
These iterables are handy because you can read them as much as you wish, but you store all the values in memory and this is not always what you want when you have a lot of values.
Generators are iterators, a kind of iterable you can only iterate over once. Generators do not store all the values in memory, they generate the values on the fly:
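```
>>> mygenerator = (x * x for x in range(3))
>>> for i in mygenerator:
...     print(i)
0
1
4
```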
It is just the same except you used `()` instead of `[]`. BUT, you cannot perform `for i in mygenerator` a second time, since generators can only be used once: they calculate 0, then forget about it and calculate 1, and end after calculating 4, one by one.
`yield` is a keyword that is used like `return`, except the function will return a generator.
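For example (an illustrative sketch; the function name is arbitrary):

```
>>> def create_generator():
...     mylist = range(3)
...     for i in mylist:
...         yield i * i
...
>>> mygenerator = create_generator()  # create a generator
>>> print(mygenerator)                # mygenerator is an object!
<generator object create_generator at 0x...>
>>> for i in mygenerator:
...     print(i)
0
1
4
```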
This example by itself is useless, but it's handy when you know your function will return a huge set of values that you will only need to read once.
To master `yield`, you must understand that when you call the function, the code you have written in the function body does not run. The function only returns the generator object; this is a bit tricky. Then, your code will be run each time the `for` uses the generator.
Now the hard part:
The first time the `for` calls the generator object created from your function, it will run the code in your function from the beginning until it hits `yield`, then it'll return the first value of the loop. Then, each subsequent call will run the loop you have written in the function one more time and return the next value, until there is no value to return.
The generator is considered empty once the function runs but no longer hits `yield`. That can be because the loop has come to an end, or because an `if/else` condition is no longer satisfied.
Generator:
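(The question's generator again, now with comments; reconstructed as above.)

```python
def _get_child_candidates(self, distance, min_dist, max_dist):
    # Here is the code that will be called each time you use the generator object.
    # If there is still a child of the node object on its left
    # AND if the distance is ok, return the next child:
    if self._leftchild and distance - max_dist < self._median:
        yield self._leftchild
    # If there is still a child of the node object on its right
    # AND if the distance is ok, return the next child:
    if self._rightchild and distance + max_dist >= self._median:
        yield self._rightchild
    # If the function arrives here, the generator will be considered empty:
    # there are only two values at most, the left and the right children.
```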
Caller:
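```python
# Create an empty list and a list with the current object reference
result, candidates = list(), [self]

# Loop on candidates (they contain only one element at the beginning)
while candidates:
    # Get the last candidate and remove it from the list
    node = candidates.pop()
    # Get the distance between obj and the candidate
    distance = node._dist(obj)
    # If the distance is ok, then you can fill in the result
    if distance <= max_dist and distance >= min_dist:
        result.extend(node._values)
    # Add the children of the candidate to the candidate list
    # so the loop will keep running until it has looked at all
    # the children of the children of the children, etc.
    candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))

return result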
This code contains several smart parts:
The loop iterates on a list, but the list expands while the loop is being iterated. It's a concise way to go through all this nested data, even if it's a bit dangerous, since you can end up with an infinite loop. In this case, `candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))` exhausts all the values of the generator, but `while` keeps creating new generator objects, which will produce values different from the previous ones since they are not applied to the same node.
The `extend()` method is a list-object method that expects an iterable and adds its values to the list. Usually we pass a list to it:
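```
>>> a = [1, 2]
>>> b = [3, 4]
>>> a.extend(b)
>>> print(a)
[1, 2, 3, 4]
```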
But in your code it gets a generator, which is good because: (1) you don't need to read the values twice, and (2) you may have a lot of children and you don't want them all stored in memory.
And it works because Python does not care whether the argument of a method is a list or not. Python expects iterables, so it will work with strings, lists, tuples and generators! This is called duck typing and is one of the reasons why Python is so cool. But this is another story, for another question...
You can stop here, or read on a little to see an advanced use of generators:
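A sketch of controlling generator exhaustion (the Bank/ATM names are illustrative; written in Python 2 style, matching the note below):

```python
class Bank():  # let's create a bank, building ATMs
    crisis = False
    def create_atm(self):
        while not self.crisis:
            yield "$100"

hsbc = Bank()                          # when everything's OK the ATM gives you as much as you want
corner_street_atm = hsbc.create_atm()
print(corner_street_atm.next())        # $100
print(corner_street_atm.next())        # $100
hsbc.crisis = True                     # crisis is coming, no more money!
print(corner_street_atm.next())        # raises StopIteration
```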
Note: for Python 3, use `print(corner_street_atm.__next__())` or `print(next(corner_street_atm))`.
It can be useful for various things like controlling access to a resource.
The itertools module contains special functions to manipulate iterables. Ever wish to duplicate a generator? Chain two generators? Group values in a nested list with a one-liner? Map/zip without creating another list? Then just `import itertools`.
An example? Let's see the possible orders of arrival for a four-horse race:
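```
>>> import itertools
>>> horses = [1, 2, 3, 4]
>>> races = itertools.permutations(horses)
>>> print(list(races))    # 24 orderings in total
[(1, 2, 3, 4), (1, 2, 4, 3), (1, 3, 2, 4), ..., (4, 3, 2, 1)]
```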
Iteration is a process implying iterables (implementing the `__iter__()` method) and iterators (implementing the `__next__()` method).
Iterables are any objects you can get an iterator from. Iterators are objects that let you iterate on iterables.
There is more about it in this article about how `for` loops work.
When you see a function with `yield` statements, apply this easy trick to understand what will happen:

1. Insert a line `result = []` at the start of the function.
2. Replace each `yield expr` with `result.append(expr)`.
3. Insert a line `return result` at the bottom of the function.
4. Yay, no more `yield` statements! Read and figure out the code.
5. Compare the function to the original definition.
This trick may give you an idea of the logic behind the function, but what actually happens with `yield` is significantly different from what happens in the list-based approach. In many cases the yield approach will be a lot more memory-efficient and faster too. In other cases this trick will get you stuck in an infinite loop, even though the original function works just fine. Read on to learn more...
First, the iterator protocol. When you write
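(the generic loop skeleton; `mylist` is any list)

```python
for x in mylist:
    ...  # loop body
```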
Python performs the following two steps:
1. Gets an iterator for `mylist`: Call `iter(mylist)` -> this returns an object with a `next()` method (or `__next__()` in Python 3).
[This is the step most people forget to tell you about]
2. Uses the iterator to loop over items: Keep calling the `next()` method on the iterator returned from step 1. The return value from `next()` is assigned to `x` and the loop body is executed. If an exception `StopIteration` is raised from within `next()`, it means there are no more values in the iterator and the loop is exited.
The truth is Python performs the above two steps anytime it wants to loop over the contents of an object, so it could be a for loop, but it could also be code like `otherlist.extend(mylist)` (where `otherlist` is a Python list).
Here `mylist` is an iterable because it implements the iterator protocol. In a user-defined class, you can implement the `__iter__()` method to make instances of your class iterable. This method should return an iterator. An iterator is an object with a `next()` method. It is possible to implement both `__iter__()` and `next()` on the same class, and have `__iter__()` return `self`. This will work for simple cases, but not when you want two iterators looping over the same object at the same time.
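A minimal sketch of such a class (Python 3 names; the class itself is illustrative):

```python
class Countdown:
    """Iterable and iterator in one class: __iter__ returns self."""
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self):  # next() in Python 2
        if self.current <= 0:
            raise StopIteration
        value = self.current
        self.current -= 1
        return value

for n in Countdown(3):
    print(n)  # 3, 2, 1
```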
So that's the iterator protocol; many objects implement this protocol: built-in lists, dictionaries, files, and so on.
Note that a `for` loop doesn't know what kind of object it's dealing with; it just follows the iterator protocol and is happy to get item after item as it calls `next()`. Built-in lists return their items one by one, dictionaries return the keys one by one, files return the lines one by one, etc. And generators return... well, that's where `yield` comes in:
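(a sketch of the generator discussed next; the name `f123` comes from the text)

```python
def f123():
    yield 1
    yield 2
    yield 3

for item in f123():
    print(item)  # 1, then 2, then 3
```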
Instead of `yield` statements, if you had three `return` statements in `f123()`, only the first would get executed, and the function would exit. But `f123()` is no ordinary function. When `f123()` is called, it does not return any of the values in the yield statements! It returns a generator object. Also, the function does not really exit; it goes into a suspended state. When the `for` loop tries to loop over the generator object, the function resumes from its suspended state at the very next line after the `yield` it previously returned from, executes the next line of code, in this case a `yield` statement, and returns that as the next item. This happens until the function exits, at which point the generator raises `StopIteration`, and the loop exits.
So the generator object is sort of like an adapter: at one end it exhibits the iterator protocol, by exposing `__iter__()` and `next()` methods to keep the `for` loop happy. At the other end, however, it runs the function just enough to get the next value out of it and puts it back in suspended mode.
Usually you can write code that doesn't use generators but implements the same logic. One option is to use the temporary-list "trick" I mentioned before. That will not work in all cases, e.g. if you have infinite loops, or it may make inefficient use of memory when you have a really long list. The other approach is to implement a new iterable class `SomethingIter` that keeps state in instance members and performs the next logical step in its `next()` (or `__next__()` in Python 3) method. Depending on the logic, the code inside the `next()` method may end up looking very complex and being prone to bugs. Here generators provide a clean and easy solution.
Think of it this way: an iterator is just a fancy-sounding term for an object that has a `next()` method. So a yield-ed function ends up being something like this:
Original version:
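(an illustrative sketch)

```python
def some_function():
    for i in range(4):
        yield i

for i in some_function():
    print(i)
```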
This is basically what the Python interpreter does with the above code:
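A hand-rolled equivalent (a sketch; real generators are implemented differently):

```python
class IteratorFromFunction:
    def __init__(self):
        self.count = 0

    def __iter__(self):
        return self

    def __next__(self):  # next() in Python 2
        if self.count < 4:
            value = self.count
            self.count += 1
            return value
        raise StopIteration

for i in IteratorFromFunction():
    print(i)
```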
For more insight as to what's happening behind the scenes, the `for` loop can be rewritten to this:
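```python
iterator = iter(some_function())
while True:
    try:
        print(next(iterator))
    except StopIteration:
        break
```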
Does that make more sense or just confuse you more? :)
I should note that this is an oversimplification for illustrative purposes. :)
The `yield` keyword is reduced to two simple facts: (1) if the compiler detects the `yield` keyword anywhere inside a function, that function's body no longer runs when you call it; instead, the call immediately returns a lazy "pending list" object called a generator; (2) a generator is iterable. In a nutshell: a generator is a lazy, incrementally-pending list, and `yield` statements allow you to use function notation to program the list values the generator should incrementally spit out.
Let's define a function `makeRange` that's just like Python's `range`. Calling `makeRange(n)` RETURNS A GENERATOR:
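(an illustrative definition)

```python
def makeRange(n):
    # return 0, 1, ..., n-1
    i = 0
    while i < n:
        yield i
        i += 1
```

```
>>> makeRange(5)
<generator object makeRange at 0x...>
```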
To force the generator to immediately return its pending values, you can pass it into `list()` (just like you could any iterable):
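```
>>> list(makeRange(5))
[0, 1, 2, 3, 4]
```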
The above example can be thought of as merely creating a list which you append to and return:
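A sketch of that list-building equivalent:

```python
def makeRangeAsList(n):
    i = 0
    result = []
    while i < n:
        result.append(i)  # takes the place of `yield i`
        i += 1
    return result
```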
There is one major difference, though; see the last section.
The last part of a list comprehension iterates over an iterable, and all generators are iterable, so they're often used like so:
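```
>>> [x + 10 for x in makeRange(5)]
[10, 11, 12, 13, 14]
```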
To get a better feel for generators, you can play around with the `itertools` module (be sure to use `chain.from_iterable` rather than `chain` when warranted). For example, you might even use generators to implement infinitely-long lazy lists like `itertools.count()`. You could implement your own `def enumerate(iterable): zip(count(), iterable)`, or alternatively do so with the `yield` keyword in a while-loop.
Please note: generators can actually be used for many more things, such as implementing coroutines or non-deterministic programming or other elegant things. However, the "lazy lists" viewpoint I present here is the most common use you will find.
This is how the "Python iteration protocol" works, i.e. what is going on when you do `list(makeRange(5))`. This is what I described earlier as a "lazy, incremental list".
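(a doctest-style sketch)

```
>>> x = makeRange(5)
>>> next(x)
0
>>> next(x)
1
>>> next(x)
2
>>> next(x)
3
>>> next(x)
4
>>> next(x)
Traceback (most recent call last):
  ...
StopIteration
```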
The built-in function `next()` just calls the object's `.__next__()` function (`.next()` in Python 2), which is a part of the "iteration protocol" and is found on all iterators. You can manually use the `next()` function (and other parts of the iteration protocol) to implement fancy things, usually at the expense of readability, so try to avoid doing that...
Normally, most people would not care about the following distinctions and probably want to stop reading here.
In Python-speak, an iterable is any object which "understands the concept of a for-loop", like a list `[1, 2, 3]`, and an iterator is a specific instance of the requested for-loop, like `[1, 2, 3].__iter__()`. A generator is exactly the same as any iterator, except for the way it was written (with function syntax).
When you request an iterator from a list, it creates a new iterator. However, when you request an iterator from an iterator (which you would rarely do), it just gives you a copy of itself.
Thus, in the unlikely event that you are failing to do something like this...
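(`myRange` here stands for a generator function like `makeRange` above)

```
>>> x = myRange(5)
>>> list(x)
[0, 1, 2, 3, 4]
>>> list(x)   # the generator is already exhausted
[]
```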
... then remember that a generator is an iterator; that is, it is one-time-use. If you want to reuse it, you should call `myRange(...)` again. If you need to use the result twice, convert the result to a list and store it in a variable, e.g. `x = list(myRange(5))`. Those who absolutely need to clone a generator (for example, those doing terrifyingly hackish metaprogramming) can use `itertools.tee` if absolutely necessary, since the copyable-iterator Python PEP standards proposal has been deferred.
What does the `yield` keyword do in Python?
`yield` is only legal inside of a function definition, and the inclusion of `yield` in a function definition makes it return a generator.
The idea for generators comes from other languages (see footnote 1) with varying implementations. In Python's Generators, the execution of the code is frozen at the point of the yield. When the generator is called (methods are discussed below) execution resumes and then freezes at the next yield.
`yield` provides an easy way of implementing the iterator protocol, defined by the following two methods: `__iter__` and `next` (Python 2) or `__next__` (Python 3). Both of those methods make an object an iterator that you could type-check with the `Iterator` Abstract Base Class from the `collections` module (`collections.abc` in modern Python 3).
The generator type is a sub-type of iterator:
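(using `collections.abc`, where the ABC lives in Python 3)

```
>>> import collections.abc, types
>>> issubclass(types.GeneratorType, collections.abc.Iterator)
True
```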
And if necessary, we can type-check like this:
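```
>>> def gen():
...     yield
...
>>> isinstance(gen(), collections.abc.Iterator)
True
```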
A feature of an `Iterator` is that once exhausted, you can't reuse or reset it:
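```
>>> g = gen()
>>> list(g)
[None]
>>> list(g)   # exhausted: nothing left
[]
```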
You'll have to make another if you want to use its functionality again (see footnote 2):
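```
>>> list(gen())   # a fresh generator works again
[None]
```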
One can yield data programmatically, for example:
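(a sketch)

```python
def func(an_iterable):
    for item in an_iterable:
        yield item
```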
The above simple generator is also equivalent to the below; as of Python 3.3 (and not available in Python 2), you can use `yield from`:
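```python
def func(an_iterable):
    yield from an_iterable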
However, `yield from` also allows for delegation to subgenerators, which will be explained in the following section on cooperative delegation with sub-coroutines.
`yield` forms an expression that allows data to be sent into the generator (see footnote 3).
Here is an example; take note of the `received` variable, which will point to the data that is sent to the generator:
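(a sketch; the `bank_account` name and the numbers are illustrative)

```python
def bank_account(deposited, interest_rate):
    while True:
        calculated_interest = interest_rate * deposited
        received = yield calculated_interest
        if received:
            deposited += received
```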
First, we must queue up the generator with the builtin function `next`. It will call the appropriate `next` or `__next__` method, depending on the version of Python you are using:
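```
>>> my_account = bank_account(1000, 0.05)
>>> first_year_interest = next(my_account)
>>> first_year_interest
50.0
```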
And now we can send data into the generator. (Sending `None` is the same as calling `next`.):
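```
>>> next_year_interest = my_account.send(first_year_interest + 1000)
>>> next_year_interest
102.5
```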
Now, recall that `yield from` is available in Python 3. This allows us to delegate coroutines to a subcoroutine:
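A minimal sketch of such delegation (the names are illustrative):

```python
def subcoroutine():
    while True:
        received = yield
        print('sub got:', received)

def delegator():
    yield from subcoroutine()
```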
And now we can delegate functionality to a sub-generator, and it can be used by a generator just as above:
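```
>>> d = delegator()
>>> next(d)          # prime the coroutine
>>> d.send('hello')
sub got: hello
```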
You can read more about the precise semantics of `yield from` in PEP 380.
The `close` method raises `GeneratorExit` at the point the function execution was frozen. This will also be called by `__del__`, so you can put any cleanup code where you handle the `GeneratorExit`:
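A sketch:

```python
def cleanup_demo():
    try:
        while True:
            yield 'value'
    except GeneratorExit:
        print('cleaning up!')
        raise
```

```
>>> g = cleanup_demo()
>>> next(g)
'value'
>>> g.close()
cleaning up!
```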
You can also throw an exception, which can be handled in the generator or propagated back to the user:
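```
>>> g = cleanup_demo()
>>> next(g)
'value'
>>> g.throw(ValueError, 'the generator cannot handle this')
Traceback (most recent call last):
  ...
ValueError: the generator cannot handle this
```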
I believe I have covered all aspects of the following question:
What does the `yield` keyword do in Python?
It turns out that `yield` does a lot. I'm sure I could add even more thorough examples to this. If you want more or have some constructive criticism, let me know by commenting below.
The grammar currently allows any expression in a list comprehension.
Since yield is an expression, some have touted it as interesting to use in comprehensions or generator expressions, in spite of citing no particularly good use-case.
The CPython core developers are discussing deprecating its allowance.
Here's a relevant post from the mailing list:
On 30 January 2017 at 19:05, Brett Cannon wrote:
On Sun, 29 Jan 2017 at 16:39 Craig Rodrigues wrote:
I'm OK with either approach. Leaving things the way they are in Python 3
is no good, IMHO.
My vote is it be a SyntaxError since you're not getting what you expect from
the syntax.
I'd agree that's a sensible place for us to end up, as any code
relying on the current behaviour is really too clever to be
maintainable.
In terms of getting there, we'll likely want:
Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
Further, there is an outstanding issue (10544) which seems to be pointing in the direction of this never being a good idea (PyPy, a Python implementation written in Python, is already raising syntax warnings.)
Bottom line: until the developers of CPython tell us otherwise, don't put `yield` in a generator expression or comprehension.
In Python 2:
In a generator function, the `return` statement is not allowed to include an `expression_list`. In that context, a bare `return` indicates that the generator is done and will cause `StopIteration` to be raised.
An `expression_list` is basically any number of expressions separated by commas; essentially, in Python 2, you can stop the generator with `return`, but you can't return a value.
In Python 3:
In a generator function, the `return` statement indicates that the generator is done and will cause `StopIteration` to be raised. The returned value (if any) is used as an argument to construct `StopIteration` and becomes the `StopIteration.value` attribute.
The languages CLU, Sather, and Icon were referenced in the proposal
to introduce the concept of generators to Python. The general idea is
that a function can maintain internal state and yield intermediate
data points on demand by the user. This promised to be superior in performance
to other approaches, including Python threading, which isn't even available on some systems.
This means, for example, that `xrange` objects (`range` in Python 3) aren't `Iterator`s, even though they are iterable, because they can be reused. Like lists, their `__iter__` methods return iterator objects.
`yield` was originally introduced as a statement, meaning that it could only appear at the beginning of a line in a code block. Now `yield` creates a yield expression.
https://docs.python.org/2/reference/simple_stmts.html#grammar-token-yield_stmt
This change was proposed to allow a user to send data into the generator just as
one might receive it. To send data, one must be able to assign it to something, and
for that, a statement just won't work.
`yield` is just like `return`; it returns whatever you tell it to (as a generator). The difference is that the next time you call the generator, execution starts from the last call to the `yield` statement. Unlike return, the stack frame is not cleaned up when a yield occurs; however, control is transferred back to the caller, so its state will resume the next time the function is called.
In the case of your code, the function `get_child_candidates` is acting like an iterator so that when you extend your list, it adds one element at a time to the new list. `list.extend` calls an iterator until it's exhausted. In the case of the code sample you posted, it would be much clearer to just return a tuple and append that to the list.
There's one extra thing to mention: a function that yields doesn't actually have to terminate. I've written code like this:
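(a sketch of that kind of code)

```python
def fib():
    last, cur = 0, 1
    while True:           # never terminates: values are produced on demand
        yield cur
        last, cur = cur, last + cur
```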
Then I can use it in other code like this:
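(`some_condition` and `coolfuncs` are placeholders)

```python
for f in fib():
    if some_condition:
        break
    coolfuncs(f)
```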
It really helps simplify some problems, and makes some things easier to work with.
For those who prefer a minimal working example, meditate on this interactive Python session:
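(a sketch of such a session)

```
>>> def f():
...     yield 1
...     yield 2
...
>>> type(f)
<class 'function'>
>>> type(f())
<class 'generator'>
>>> [x for x in f()]
[1, 2]
```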
Yield gives you a generator.
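A sketch of the two functions being contrasted (`foo` gets a full list, `bar` gets a generator):

```python
def get_odd_numbers(i):
    return list(range(1, i, 2))   # builds the whole list up front

def yield_odd_numbers(i):
    for x in range(1, i, 2):
        yield x                   # produces one value at a time

foo = get_odd_numbers(10)    # [1, 3, 5, 7, 9]
bar = yield_odd_numbers(10)  # <generator object ...>
```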
As you can see, in the first case foo holds the entire list in memory at once. It's not a big deal for a list with 5 elements, but what if you want a list of 5 million? Not only is this a huge memory eater, it also costs a lot of time to build at the time that the function is called. In the second case, bar just gives you a generator. A generator is an iterable--which means you can use it in a for loop, etc, but each value can only be accessed once. All the values are also not stored in memory at the same time; the generator object "remembers" where it was in the looping the last time you called it--this way, if you're using an iterable to (say) count to 50 billion, you don't have to count to 50 billion all at once and store the 50 billion numbers to count through. Again, this is a pretty contrived example, you probably would use itertools if you really wanted to count to 50 billion. :)
This is the most simple use case of generators. As you said, it can be used to write efficient permutations, using yield to push things up through the call stack instead of using some sort of stack variable. Generators can also be used for specialized tree traversal, and all manner of other things.
It's returning a generator. I'm not particularly familiar with Python, but I believe it's the same kind of thing as C#'s iterator blocks if you're familiar with those.
There's an IBM article which explains it reasonably well (for Python) as far as I can see.
The key idea is that the compiler/interpreter/whatever does some trickery so that as far as the caller is concerned, they can keep calling next() and it will keep returning values - as if the generator method was paused. Now obviously you can't really "pause" a method, so the compiler builds a state machine for you to remember where you currently are and what the local variables etc look like. This is much easier than writing an iterator yourself.
There is one type of answer that I don't feel has been given yet, among the many great answers that describe how to use generators. Here is the programming language theory answer:
The yield
statement in Python returns a generator. A generator in Python is a function that returns continuations (and specifically a type of coroutine, but continuations represent the more general mechanism to understand what is going on).
Continuations in programming languages theory are a much more fundamental kind of computation, but they are not often used, because they are extremely hard to reason about and also very difficult to implement. But the idea of what a continuation is, is straightforward: it is the state of a computation that has not yet finished. In this state, the current values of variables, the operations that have yet to be performed, and so on, are saved. Then at some point later in the program the continuation can be invoked, such that the program's variables are reset to that state and the operations that were saved are carried out.
Continuations, in this more general form, can be implemented in two ways. In the `call/cc` way, the program's stack is literally saved, and then when the continuation is invoked, the stack is restored.
In continuation passing style (CPS), continuations are just normal functions (only in languages where functions are first class) which the programmer explicitly manages and passes around to subroutines. In this style, program state is represented by closures (and the variables that happen to be encoded in them) rather than variables that reside somewhere on the stack. Functions that manage control flow accept continuation as arguments (in some variations of CPS, functions may accept multiple continuations) and manipulate control flow by invoking them by simply calling them and returning afterwards. A very simple example of continuation passing style is as follows:
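(a sketch; all the helper names here are hypothetical)

```python
def save_file(filename):
    # Wrap the actual write in a continuation (a first-class closure)...
    def write_file_continuation():
        write_stuff_to_file(filename)   # hypothetical helper
    # ...and hand it to another operation, which calls it if appropriate.
    check_if_file_exists_and_ask_to_overwrite(write_file_continuation)
```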
In this (very simplistic) example, the programmer saves the operation of actually writing the file into a continuation (which can potentially be a very complex operation with many details to write out), and then passes that continuation (i.e, as a first-class closure) to another operator which does some more processing, and then calls it if necessary. (I use this design pattern a lot in actual GUI programming, either because it saves me lines of code or, more importantly, to manage control flow after GUI events trigger.)
The rest of this post will, without loss of generality, conceptualize continuations as CPS, because it is a hell of a lot easier to understand and read.
Now let's talk about generators in Python. Generators are a specific subtype of continuation. Whereas continuations are able in general to save the state of a computation (i.e., the program's call stack), generators are only able to save the state of iteration over an iterator. Although, this definition is slightly misleading for certain use cases of generators. For instance:
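```python
def f():
    while True:
        yield 4
```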
This is clearly a reasonable iterable whose behavior is well defined: each time the generator iterates over it, it returns 4 (and does so forever). But it probably isn't the prototypical type of iterable that comes to mind when thinking of iterators (i.e., `for x in collection: do_something(x)`). This example illustrates the power of generators: if anything is an iterator, a generator can save the state of its iteration.
To reiterate: continuations can save the state of a program's stack, and generators can save the state of iteration. This means that continuations are a lot more powerful than generators, but also that generators are a lot, lot easier. They are easier for the language designer to implement, and they are easier for the programmer to use (if you have some time to burn, try to read and understand this page about continuations and call/cc).
But you could easily implement (and conceptualize) generators as a simple, specific case of continuation passing style:
Whenever `yield` is called, it tells the function to return a continuation. When the function is called again, it starts from wherever it left off. So, in pseudo-pseudocode (i.e., not pseudocode, but not code) the generator's `next` method is basically as follows:
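A runnable rendering of that idea (a sketch: each continuation returns a `(value, next_continuation)` pair, or `None` when done):

```python
class Generator:
    def __init__(self, continuation):
        self.continuation = continuation

    def __iter__(self):
        return self

    def __next__(self):
        if self.continuation is None:
            raise StopIteration
        result = self.continuation()   # run "until the next yield"
        if result is None:             # the function finished
            self.continuation = None
            raise StopIteration
        value, self.continuation = result
        return value
```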
where the `yield` keyword is actually syntactic sugar for the real generator function, basically something like:
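For instance, a counter written by hand in this style (a sketch):

```python
def make_counter():
    # What `def f(): n = 0; while True: yield n; n += 1`
    # might desugar to under this model:
    def step(n):
        return (n, lambda: step(n + 1))   # (value, continuation)
    return Generator(lambda: step(0))

g = make_counter()
print(next(g))  # 0
print(next(g))  # 1
```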
Remember that this is just pseudocode, and the actual implementation of generators in Python is more complex. But as an exercise to understand what is going on, try to use continuation-passing style to implement generator objects without use of the `yield` keyword.
TL;DR
This was my first "aha" moment with yield.
`yield` is a sugary way to say: build a series of stuff.
Same behavior:
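(a sketch of the comparison)

```
>>> def makeSquares():
...     result = []
...     for x in range(3):
...         result.append(x * x)
...     return result
...
>>> makeSquares()
[0, 1, 4]
>>> def genSquares():
...     for x in range(3):
...         yield x * x
...
>>> list(genSquares())
[0, 1, 4]
```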
Different behavior:
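```
>>> squares = genSquares()
>>> list(squares)
[0, 1, 4]
>>> list(squares)   # single-pass: the generator is spent
[]
```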
Yield is single-pass: you can only iterate through once. When a function has a yield in it, we call it a generator function, and an iterator is what it returns. Those terms are revealing: we lose the convenience of a container, but gain the power of an arbitrarily long series that's computed as needed.
Yield is lazy; it puts off computation. A function with a yield in it doesn't actually execute at all when you call it. The iterator object it returns uses magic to maintain the function's internal context. Each time you call `next()` on the iterator (this happens in a for-loop), execution inches forward to the next yield. (`return` raises `StopIteration` and ends the series.)
Yield is versatile. It can do infinite loops:
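```
>>> def incrementer():
...     i = 0
...     while True:
...         yield i
...         i += 1
...
>>> it = incrementer()
>>> next(it)
0
>>> next(it)
1
```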
If you need multiple passes and the series isn't too long, just call `list()` on it:
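```
>>> numbers = (x * x for x in range(4))
>>> stored = list(numbers)   # now you can pass over it as often as you like
>>> stored
[0, 1, 4, 9]
>>> stored
[0, 1, 4, 9]
```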
Brilliant choice of the word `yield`, because both meanings apply:
yield — produce or provide (as in agriculture)
...provide the next data in the series.
yield — give way or relinquish (as in political power)
...relinquish CPU execution until the iterator advances.
Here is an example in plain language. I will provide a correspondence between high-level human concepts to low-level Python concepts.
I want to operate on a sequence of numbers, but I don't want to bother myself with the creation of that sequence; I want only to focus on the operation I want to do. So I hand off the production of the numbers and simply ask for the next one whenever I need it.
This is what a generator does (a function that contains a `yield`); it starts executing, pauses whenever it does a `yield`, and when asked for a `.next()` value it continues from the point it was last. It fits perfectly by design with the iterator protocol of Python, which describes how to sequentially request values.
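A tiny sketch of such an on-demand sequence:

```python
def number_sequence(start=0):
    n = start
    while True:
        yield n      # hand out one number, then pause here
        n += 1

seq = number_sequence()
print(next(seq))  # 0
print(next(seq))  # 1
```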
The most famous user of the iterator protocol is the `for` command in Python. So, whenever you do a `for item in sequence:`, it doesn't matter if `sequence` is a list, a string, a dictionary or a generator object like the one described above; the result is the same: you read items off a sequence one by one.
Note that `def`ining a function which contains a `yield` keyword is not the only way to create a generator; it's just the easiest way to create one.
For more accurate information, read about iterator types, the yield statement and generators in the Python documentation.
While a lot of answers show why you'd use a `yield` to create a generator, there are more uses for `yield`. It's quite easy to make a coroutine, which enables the passing of information between two blocks of code. I won't repeat any of the fine examples that have already been given about using `yield` to create a generator.
To help understand what a `yield` does in the following code, you can use your finger to trace the cycle through any code that has a `yield`. Every time your finger hits the `yield`, you have to wait for a `next` or a `send` to be entered. When a `next` is called, you trace through the code until you hit the `yield`; the code on the right of the `yield` is evaluated and returned to the caller; then you wait. When `next` is called again, you perform another loop through the code. However, you'll note that in a coroutine, `yield` can also be used with a `send`, which will send a value from the caller into the yielding function. If a `send` is given, then `yield` receives the value sent and spits it out the left hand side; then the trace through the code progresses until you hit the `yield` again (returning the value at the end, as if `next` was called).
For example:
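A sketch of such a coroutine (the names are illustrative):

```python
def echo_coroutine():
    value = None
    while True:
        # `yield value` hands `value` to the caller, then waits;
        # whatever the caller send()s lands on the left-hand side
        received = yield value
        value = "saw: {}".format(received)

co = echo_coroutine()
next(co)               # prime: run to the first yield (returns None)
print(co.send("a"))    # saw: a
print(co.send("b"))    # saw: b
```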
There is another `yield` use and meaning (since Python 3.3):
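```python
yield from <expr>
```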
From PEP 380 -- Syntax for Delegating to a Subgenerator:
A syntax is proposed for a generator to delegate part of its operations to another generator. This allows a section of code containing 'yield' to be factored out and placed in another generator. Additionally, the subgenerator is allowed to return with a value, and the value is made available to the delegating generator.
The new syntax also opens up some opportunities for optimisation when one generator re-yields values produced by another.
Moreover this will introduce (since Python 3.5):
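(the PEP 492 syntax, sketched)

```python
# PEP 492 (Python 3.5): dedicated syntax for native coroutines
async def fetch_data():
    ...

async def main():
    result = await fetch_data()
```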
to avoid coroutines being confused with a regular generator (today `yield` is used in both).
I was going to post "read page 19 of Beazley's 'Python: Essential Reference' for a quick description of generators", but so many others have posted good descriptions already.
Also, note that `yield` can be used in coroutines as the dual of their use in generator functions. Although it isn't the same use as your code snippet, `(yield)` can be used as an expression in a function. When a caller sends a value to the method using the `send()` method, then the coroutine will execute until the next `(yield)` statement is encountered.
Generators and coroutines are a cool way to set up data-flow-type applications. I thought it would be worthwhile knowing about the other use of the `yield` statement in functions.
Here are some Python examples of how to actually implement generators as if Python did not provide syntactic sugar for them:
As a Python generator:
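(an illustrative counter; the original snippet was lost)

```python
def counter(limit):
    n = 0
    while n < limit:
        yield n
        n += 1

for c in counter(3):
    print(c)  # 0, 1, 2
```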
Using lexical closures instead of generators
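(a sketch; the closure mimics the iterator protocol by hand)

```python
def make_counter(limit):
    n = 0
    def step():
        nonlocal n
        if n >= limit:
            raise StopIteration   # signal exhaustion, as an iterator would
        value = n
        n += 1
        return value
    return step

count = make_counter(3)
print(count())  # 0
print(count())  # 1
```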
Using object closures instead of generators (because ClosuresAndObjectsAreEquivalent)
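(the same counter as an object holding its own state)

```python
class Counter:
    def __init__(self, limit):
        self.n = 0
        self.limit = limit

    def __iter__(self):
        return self

    def __next__(self):
        if self.n >= self.limit:
            raise StopIteration
        value = self.n
        self.n += 1
        return value

for c in Counter(3):
    print(c)  # 0, 1, 2
```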
From a programming viewpoint, the iterators are implemented as thunks.
To implement iterators, generators, and thread pools for concurrent execution, etc. as thunks (also called anonymous functions), one uses messages sent to a closure object, which has a dispatcher, and the dispatcher answers to "messages".
http://en.wikipedia.org/wiki/Message_passing
"next" is a message sent to a closure, created by the "iter" call.
There are lots of ways to implement this computation. I used mutation, but it is easy to do it without mutation, by returning the current value and the next yielder.
Here is a demonstration which uses the structure of R6RS, but the semantics is absolutely identical to Python's. It's the same model of computation, and only a change in syntax is required to rewrite it in Python.
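The original demonstration was given in R6RS Scheme; here is a rough Python rendering of the same message-passing idea (a sketch):

```python
def make_yielder():
    # a closure with a dispatcher that answers "messages"
    state = {"current": 0}

    def dispatch(message):
        if message == "next":
            value = state["current"]
            state["current"] += 1     # mutation, as discussed above
            return value
        raise ValueError("unknown message: %r" % message)

    return dispatch

it = make_yielder()   # plays the role of the "iter" call
print(it("next"))     # 0
print(it("next"))     # 1
```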
Here is a simple example:
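(an illustrative example; the original snippet was lost)

```python
def count_from(n):
    while True:
        yield n
        n += 1

for number in count_from(10):
    if number > 13:
        break
    print(number)
```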
Output:
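(given the sketch above)

```
10
11
12
13
```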
I am not a Python developer, but it looks to me like `yield` holds the position of program flow, and the next loop starts from the "yield" position. It seems like it is waiting at that position, and just before that, returning a value outside, and next time it continues to work. That seems to be an interesting and nice ability :D
Here is a mental image of what `yield` does.
I like to think of a thread as having a stack (even when it's not implemented that way).
When a normal function is called, it puts its local variables on the stack, does some computation, then clears the stack and returns. The values of its local variables are never seen again.
With a `yield` function, when its code begins to run (i.e. after the function is called, returning a generator object, whose `next()` method is then invoked), it similarly puts its local variables onto the stack and computes for a while. But then, when it hits the `yield` statement, before clearing its part of the stack and returning, it takes a snapshot of its local variables and stores them in the generator object. It also writes down the place where it's currently up to in its code (i.e. the particular `yield` statement).

So it's a kind of frozen function that the generator is hanging onto.
When `next()` is called subsequently, it retrieves the function's belongings onto the stack and re-animates it. The function continues to compute from where it left off, oblivious to the fact that it had just spent an eternity in cold storage.
Compare the following examples:
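(a reconstructed sketch; the `yield` here is deliberately unreachable)

```python
def normalFunction():
    return
    if False:
        pass

def yielderFunction():
    return
    if False:
        yield 12   # never reached, but its mere presence matters

print(type(normalFunction()))   # <class 'NoneType'>
print(type(yielderFunction()))  # <class 'generator'>
```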
When we call the second function, it behaves very differently from the first. The `yield` statement might be unreachable, but if it's present anywhere, it changes the nature of what we're dealing with. Calling `yielderFunction()` doesn't run its code, but makes a generator out of the code. (Maybe it's a good idea to name such things with the `yielder` prefix for readability.)
The `gi_code` and `gi_frame` fields are where the frozen state is stored. Exploring them with `dir(..)`, we can confirm that our mental model above is credible.
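(the exact attribute list varies by Python version)

```
>>> g = yielderFunction()
>>> [attr for attr in dir(g) if attr.startswith('gi_')]
['gi_code', 'gi_frame', 'gi_running', 'gi_yieldfrom']
```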
All great answers; however, a bit difficult for newbies.
I assume you have learned the `return` statement. As an analogy, `return` and `yield` are twins: `return` means "return and stop", whereas `yield` means "return, but continue". Try to get a num_list with `return`:
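(reconstructed sketch)

```python
def num_list(n):
    for i in range(n):
        return i
```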
Run it:
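```
In [2]: num_list(3)
Out[2]: 0
```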
See, you get only a single number rather than a list of them: `return` never lets you prevail happily; it runs once and quits.
Replace `return` with `yield`:
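```python
def num_list(n):
    for i in range(n):
        yield i
```

```
In [10]: for i in num_list(3):
    ...:     print(i)
0
1
2
```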
Now you win: you get all the numbers. Compared to `return`, which runs once and stops, `yield` runs as many times as you planned.
You can interpret `return` as `return one of them` and `yield` as `return all of them`. This is called being iterable.
That's the core of `yield`.
The difference between the list a `return` outputs and what a `yield` outputs is:
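(an IPython-style sketch of the comparison)

```
In [11]: num_list(3)
Out[11]: <generator object num_list at 0x10327c990>

In [12]: list(num_list(3))
Out[12]: [0, 1, 2]
```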
You will always get `[0, 1, 2]` from a list object, but you can only retrieve them from "the object `yield` outputs" once. So it has a new name: a `generator` object, as displayed in `Out[11]: <generator object num_list at 0x10327c990>`.
In conclusion, as a metaphor to grok it: `return` and `yield` are twins; `list` and `generator` are twins.
Like every answer suggests, `yield` is used for creating a sequence generator. It's used for generating some sequence dynamically. For example, while reading a file line by line on a network, you can use the `yield` function as follows:
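(a sketch; `con` stands for some already-opened connection object)

```python
def getNextLines():
    while con.isOpen():
        yield con.read()
```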
You can use it in your code as follows:
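```python
for line in getNextLines():
    doSomeThing(line)   # doSomeThing is a placeholder
```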
Execution Control Transfer gotcha
The execution control will be transferred from getNextLines() to the `for` loop when yield is executed. Thus, every time getNextLines() is invoked, execution begins from the point where it was paused last time.
Thus in short, a function with the following code
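(a sketch)

```python
def simpleYield():
    yield "first time"
    yield "second time"
    yield "third time"

for i in simpleYield():
    print(i)
```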
will print
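```
first time
second time
third time
```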
Yield gives you an object
A `return` in a function will return a single value. If you want a function to return a huge set of values, use `yield`.
More importantly, `yield` is a barrier: like a barrier in the CUDA language, it will not transfer control until it gets completed.
That is, it will run the code in your function from the beginning until it hits `yield`. Then, it'll return the first value of the loop. Then, every other call will run the loop you have written in the function one more time, returning the next value until there isn't any value to return.
`yield` is like a return element for a function. The difference is that the `yield` element turns a function into a generator. A generator behaves just like a function until something is "yielded". The generator pauses until it is next called, and continues from exactly the same point where it left off. You can get a sequence of all the "yielded" values in one go by calling `list(generator())`.
(My below answer only speaks from the perspective of using Python generator, not the underlying implementation of generator mechanism, which involves some tricks of stack and heap manipulation.)
When `yield` is used instead of a `return` in a Python function, that function is turned into something special called a generator function. That function will return an object of `generator` type. The `yield` keyword is a flag to notify the Python compiler to treat such a function specially. Normal functions terminate once some value is returned from them. But with the help of the compiler, the generator function can be thought of as resumable: the execution context will be restored and the execution will continue from the last run, until you explicitly call `return`, which will raise a `StopIteration` exception (which is also part of the iterator protocol), or reach the end of the function. I found a lot of references about generators, but this one from the functional programming perspective is the most digestible.
(Now I want to talk about the rationale behind generators and iterators, based on my own understanding. I hope this can help you grasp the essential motivation of iterators and generators. Such a concept shows up in other languages as well, such as C#.)

As I understand it, when we want to process a bunch of data, we usually first store the data somewhere and then process it one by one. But this intuitive approach is problematic. If the data volume is huge, it's expensive to store it as a whole beforehand. So instead of storing the data itself directly, why not store some kind of metadata indirectly, i.e. the logic for how the data is computed?
There are 2 approaches to wrap such metadata: (1) the OO approach, wrapping the metadata in a class that implements the iterator protocol (the `__iter__()` and `__next__()` methods); (2) the functional approach, wrapping the metadata in a function that contains `yield`, i.e. a generator function.
Either way, an iterator is created, i.e. some object that can give you the data you want. The OO approach may be a bit complex. Anyway, which one to use is up to you.
The `yield` keyword simply collects the returned results. Think of `yield` like `return +=`.
Many people use `return` rather than `yield`, but in some cases `yield` can be more efficient and easier to work with. Here is an example for which `yield` is definitely best:
return (in function)
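(a reconstructed sketch; the date strings are illustrative)

```python
import random

def return_dates():
    dates = []  # with 'return' you need this list to store everything first
    for i in range(5):
        date = random.choice(["1st", "2nd", "3rd", "4th", "5th",
                              "6th", "7th", "8th", "9th", "10th"])
        dates.append(date)
    return dates
```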
yield (in function)
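```python
def yield_dates():
    for i in range(5):
        date = random.choice(["1st", "2nd", "3rd", "4th", "5th",
                              "6th", "7th", "8th", "9th", "10th"])
        yield date  # 'yield' makes a generator automatically
```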
Calling functions
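```python
dates_list = return_dates()
print(dates_list)
for i in dates_list:
    print(i)

dates_generator = yield_dates()
print(dates_generator)
for i in dates_generator:
    print(i)
```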
Both functions do the same thing, but `yield` uses three lines instead of five and has one less variable to worry about.
This is the result from the code:
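(output will vary because of `random.choice`; one possible run:)

```
['1st', '4th', '2nd', '8th', '10th']
1st
4th
2nd
8th
10th
<generator object yield_dates at 0x...>
3rd
6th
9th
5th
1st
```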
As you can see, both functions do the same thing. The only difference is that `return_dates()` gives a list and `yield_dates()` gives a generator. A real-life example would be something like reading a file line by line, or if you just want to make a generator.
In summary, the `yield` statement transforms your function into a factory that produces a special object called a generator, which wraps around the body of your original function. When the generator is iterated, it executes your function until it reaches the next `yield`, then suspends execution and evaluates to the value passed to `yield`. It repeats this process on each iteration until the path of execution exits the function. For instance,
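(a sketch)

```python
def simple_generator():
    yield 'one'
    yield 'two'
    yield 'three'

for i in simple_generator():
    print(i)
```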
simply outputs
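```
one
two
three
```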
The power comes from using the generator with a loop that calculates a sequence: the generator executes the loop, stopping each time to "yield" the next result of the calculation. In this way, it calculates a list on the fly, the benefit being the memory saved for especially large calculations.
Say you wanted to create your own `range` function that produces an iterable range of numbers. You could do it like so,
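(a sketch; the name `myRangeNaive` is illustrative)

```python
def myRangeNaive(i):
    n = 0
    range_list = []
    while n < i:
        range_list.append(n)
        n += 1
    return range_list
```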
and use it like this:
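```python
for i in myRangeNaive(10):
    print(i)
```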
But this is inefficient because (a) you build a whole list in memory that you only iterate over once, and (b) you effectively loop over the data twice: once to build the list and once to consume it. Luckily, Guido and his team were generous enough to develop generators so we could just do this:
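```python
def myRangeSmart(i):
    n = 0
    while n < i:
        yield n
        n += 1

for i in myRangeSmart(10):
    print(i)
```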
Now upon each iteration, a function on the generator called `next()` executes the function until it either reaches a "yield" statement, in which case it stops and "yields" the value, or reaches the end of the function. In this case, on the first call, `next()` executes up to the yield statement and yields `n`. On the next call it executes the increment statement, jumps back to the `while`, evaluates it, and if true, stops and yields `n` again. It continues that way until the while condition returns false and the generator jumps to the end of the function.
Here's a simple `yield`-based approach to compute the Fibonacci series, explained:
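(a sketch; bounded so it can be fully consumed later)

```python
def fib(max):
    a, b = 0, 1
    while a < max:
        yield a
        a, b = b, a + b
```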
When you enter this into your REPL and then try and call it, you'll get a mystifying result:
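```
>>> fib(10)
<generator object fib at 0x...>
```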
This is because the presence of `yield` signaled to Python that you want to create a generator, that is, an object that generates values on demand.
So, how do you generate these values? This can either be done directly by using the built-in function `next`, or indirectly by feeding it to a construct that consumes values.
Using the built-in `next()` function, you directly invoke `.next`/`__next__`, forcing the generator to produce a value:
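```
>>> g = fib(10)
>>> next(g)
0
>>> next(g)
1
>>> next(g)
1
>>> next(g)
2
```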
Indirectly, if you provide `fib` to a `for` loop, a `list` initializer, a `tuple` initializer, or anything else that expects an object that generates/produces values, you'll "consume" the generator until no more values can be produced by it (and it returns):
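```
>>> list(fib(10))
[0, 1, 1, 2, 3, 5, 8]
```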
Similarly, with a `tuple` initializer:
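```
>>> tuple(fib(10))
(0, 1, 1, 2, 3, 5, 8)
```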
A generator differs from a function in the sense that it is lazy. It accomplishes this by maintaining its local state and allowing you to resume whenever you need to.
When you first invoke `fib` by calling it (e.g. `f = fib(10)`), Python compiles the function, encounters the `yield` keyword and simply returns a generator object back at you. Not very helpful, it seems.
When you then request that it generate the first value, directly or indirectly, it executes all statements that it finds until it encounters a `yield`; it then yields back the value you supplied to `yield` and pauses. For an example that better demonstrates this, let's use some `print` calls (replace with `print "text"` if on Python 2):
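(a sketch; the name `yielder` is illustrative)

```python
def yielder(value):
    """This is an infinite generator. Only use next on it."""
    while True:
        print("I'm going to generate the value for you")
        print("Then I'll pause for a while")
        yield value
        print("Let's go through it again")
```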
Now, enter in the REPL:
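```
>>> y = yielder("Hello, yield!")
```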
You now have a generator object waiting for a command to generate a value. Use `next` and see what gets printed:
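```
>>> next(y)
I'm going to generate the value for you
Then I'll pause for a while
'Hello, yield!'
```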
The unquoted results are what's printed. The quoted result is what is returned from `yield`. Call `next` again now:
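```
>>> next(y)
Let's go through it again
I'm going to generate the value for you
Then I'll pause for a while
'Hello, yield!'
```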
The generator remembers it was paused at `yield value` and resumes from there. The next message is printed, and the search for the `yield` statement to pause at is performed again (due to the `while` loop).
Yet another TL;DR
Iterator on a list: `next()` returns the next element of the list.
Iterator on a generator: `next()` will compute the next element on the fly (execute code).
You can see yield/generators as a way to manually run the control flow from outside (like continuing a loop one step at a time), by calling `next`, however complex the flow.

Note: the generator is NOT a normal function. It remembers its previous state, like local variables (stack). See other answers or articles for a detailed explanation. The generator can only be iterated over once. You could do without `yield`, but it would not be as nice, so it can be considered "very nice" language sugar.