I want to do something similar to this:

```
>>> x = [1,2,3,4,5,6,7,8,9,0]
>>> x
[1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
>>> y = [1,3,5,7,9]
>>> y
[1, 3, 5, 7, 9]
>>> x - y # (should return [2, 4, 6, 8, 0])
```

But this is not supported by Python lists. What is the best way of doing it?

Use a list comprehension:
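A sketch of the comprehension that answer is pointing at, using the question's lists:

```python
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
y = [1, 3, 5, 7, 9]

# keep only the elements of x that do not appear in y
result = [item for item in x if item not in y]
print(result)  # [2, 4, 6, 8, 0]
```

This preserves the order of `x`, but each membership test scans `y`, so it does `O(len(x) * len(y))` work.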

If you want to use the `-` infix syntax, you can override `__sub__` in a `list` subclass; you can then use it like built-in subtraction:
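The class itself isn't shown here; a minimal sketch, assuming the `MyList(1, 2, 3, 4)` calling convention mentioned in a later answer:

```python
class MyList(list):
    def __init__(self, *args):
        # takes elements directly, e.g. MyList(1, 2, 3, 4)
        super().__init__(args)

    def __sub__(self, other):
        # subtraction keeps elements of self that are absent from other
        return self.__class__(*[item for item in self if item not in other])

x = MyList(1, 2, 3, 4)
y = MyList(2, 5, 2)
print(x - y)  # [1, 3, 4]
```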

But if you don't absolutely need list properties (for example, ordering), just use sets as the other answers recommend.

Use set difference

Or you might just have x and y be sets so you don't have to do any conversions.
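For instance, keeping `x` and `y` as sets from the start avoids the conversions entirely (a sketch; note that set difference does not preserve order):

```python
x = {1, 2, 3, 4, 5, 6, 7, 8, 9, 0}   # sets from the start: no conversions needed
y = {1, 3, 5, 7, 9}

# x - y is the set of elements in x that are not in y
print(sorted(x - y))  # [0, 2, 4, 6, 8]
```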

That is a "set subtraction" operation. Use the set data structure for that.

For example, in Python 2.7 (the same works in Python 3):
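A minimal version that behaves the same on Python 2.7 and 3, with its output shown inline:

```python
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
y = [1, 3, 5, 7, 9]

# convert both lists to sets and take the difference
print(sorted(set(x) - set(y)))  # [0, 2, 4, 6, 8]
```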

If duplicates and the ordering of items are a problem:

`[i for i in a if i not in b or b.remove(i)]`
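A quick demonstration of how this behaves (note that it mutates `b` as a side effect):

```python
a = [1, 2, 2, 2, 3]
b = [2, 2]

# b.remove(i) returns None (falsy), so a matched item is both dropped from
# the result and consumed from b; only as many copies as b holds are removed
result = [i for i in a if i not in b or b.remove(i)]
print(result)  # [1, 2, 3]
print(b)       # []
```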

For many use cases, the answer you want is:
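That is: convert only `y` to a set and iterate `x` as an ordered list. A sketch:

```python
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
y = [1, 3, 5, 7, 9]

ys = set(y)                                      # one O(len(y)) conversion
result = [item for item in x if item not in ys]  # O(1) lookup per element of x
print(result)  # [2, 4, 6, 8, 0]
```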

This is a hybrid between aaronasterling's answer and quantumSoup's answer.

aaronasterling's version does `len(y)` item comparisons for each element in `x`, so it takes quadratic time. quantumSoup's version uses sets, so it does a single constant-time set lookup for each element in `x`. But, because it converts both `x` and `y` into sets, it loses the order of your elements.

By converting only `y` into a set, and iterating `x` in order, you get the best of both worlds: linear time, and order preservation.*

However, this still has a problem from quantumSoup's version: it requires your elements to be hashable. That's pretty much built into the nature of sets.** If you're trying to, e.g., subtract a list of dicts from another list of dicts, but the list to subtract is large, what do you do?

If you can decorate your values in some way that they're hashable, that solves the problem. For example, with a flat dictionary whose values are themselves hashable:
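A hypothetical sketch, using `frozenset(d.items())` as the decoration (dicts aren't hashable, but a frozenset of their items is, provided the values are):

```python
x = [{'a': 1}, {'b': 2}, {'c': 3}]
y = [{'b': 2}]

# decorate each dict into a hashable frozenset of its items
ys = {frozenset(d.items()) for d in y}
result = [d for d in x if frozenset(d.items()) not in ys]
print(result)  # [{'a': 1}, {'c': 3}]
```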

If your types are a bit more complicated (e.g., often you're dealing with JSON-compatible values, which are hashable, or lists or dicts whose values are recursively the same type), you can still use this solution. But some types just can't be converted into anything hashable.

If your items aren't, and can't be made, hashable, but they are comparable, you can at least get log-linear time (`O(N*log M)`, which is a lot better than the `O(N*M)` time of the list solution, but not as good as the `O(N+M)` time of the set solution) by sorting and using `bisect`:
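A sketch of the sort-and-`bisect` approach, using nested lists as stand-in items that are comparable but not hashable:

```python
import bisect

x = [[1], [2], [3], [4], [5]]   # lists: comparable, but not hashable
y = [[1], [3], [5]]

ys = sorted(y)                  # one-time O(M log M) sort

def in_sorted(sorted_seq, val):
    # O(log M) binary-search membership test
    i = bisect.bisect_left(sorted_seq, val)
    return i < len(sorted_seq) and sorted_seq[i] == val

result = [item for item in x if not in_sorted(ys, item)]
print(result)  # [[2], [4]]
```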

If your items are neither hashable nor comparable, then you're stuck with the quadratic solution.

\* Note that you could also do this by using a pair of OrderedSet objects, for which you can find recipes and third-party modules. But I think this is simpler.

\*\* The reason set lookups are constant time is that all it has to do is hash the value and see if there's an entry for that hash. If it can't hash the value, this won't work.

Looking up values in sets is faster than looking them up in lists:

I believe this will scale slightly better than:

Both preserve the order of the lists.
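The two variants being compared presumably looked like this; both keep the order of `x`:

```python
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
y = [1, 3, 5, 7, 9]

ys = set(y)
with_set = [item for item in x if item not in ys]   # O(1) membership tests
with_list = [item for item in x if item not in y]   # O(len(y)) membership tests

print(with_set)   # [2, 4, 6, 8, 0]
print(with_list)  # [2, 4, 6, 8, 0]
```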

Try this: if the lists allow duplicate elements, you can use `Counter` from `collections`:
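For example, `Counter` subtraction removes exactly as many copies as `y` contains (a sketch):

```python
from collections import Counter

x = [1, 2, 2, 2, 3]
y = [2, 2]

# Counter(x) - Counter(y) subtracts counts, so only two of the three 2s go
result = list((Counter(x) - Counter(y)).elements())
print(result)  # [1, 2, 3]
```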

If you need to preserve the order of elements from x:
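One sketch of an order-preserving variant, relying on `Counter.subtract` returning `None` (falsy) inside the comprehension's condition:

```python
from collections import Counter

x = [1, 2, 3, 2, 1]
y = [1, 2]

c = Counter(y)
# keep item only when no removal is owed for it; otherwise consume one removal
result = [item for item in x if not c[item] or c.subtract([item])]
print(result)  # [3, 2, 1]
```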

The answer provided by @aaronasterling looks good; however, it is not compatible with the default interface of `list`: `x = MyList(1, 2, 3, 4)` vs. `x = MyList([1, 2, 3, 4])`. Thus, the following can be used as a more Python-list-friendly alternative:
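A sketch of such a subclass; because it doesn't override `__init__`, it accepts an iterable exactly like the built-in `list`:

```python
class MyList(list):
    def __sub__(self, other):
        # return a new MyList with other's elements filtered out
        return self.__class__(item for item in self if item not in other)

x = MyList([1, 2, 3, 4])
y = MyList([2, 5, 2])
print(x - y)  # [1, 3, 4]
```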

The other solutions have one of a few problems:

1. They don't preserve order, or
2. They don't remove a precise count of elements: for `x = [1, 2, 2, 2]` and `y = [2, 2]`, they convert `y` to a `set` and either remove all matching elements (leaving only `[1]`) or remove one of each unique element (leaving `[1, 2, 2]`), when the proper behavior would be to remove `2` twice, leaving `[1, 2]`, or
3. They do `O(m * n)` work, where an optimal solution can do `O(m + n)` work.

Alain was on the right track with `Counter` to solve #2 and #3, but that solution will lose ordering. The solution that preserves order (removing the first `n` copies of each value for `n` repetitions in the `list` of values to remove) is:
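A sketch of that approach, using the `out` and `val` names the explanation below refers to:

```python
from collections import Counter

x = [1, 2, 3, 4, 3, 2, 1]
y = [1, 2, 2]

removals = Counter(y)      # counts of each value still to remove
out = []
for val in x:
    if removals[val]:      # this value still owes a removal
        removals[val] -= 1
        continue
    out.append(val)

print(out)  # [3, 4, 3, 1]
```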

To make it remove the *last* copies of each element instead, just change the `for` loop to `for val in reversed(x):` and add `out.reverse()` immediately after exiting the `for` loop.

Constructing the `Counter` is `O(n)` in terms of `y`'s length, iterating `x` is `O(n)` in terms of `x`'s length, and `Counter` membership testing and mutation are `O(1)`, while `list.append` is amortized `O(1)` (a given `append` can be `O(n)`, but for many `append`s the overall big-O averages `O(1)`, since fewer and fewer of them require a reallocation), so the overall work done is `O(m + n)`.

You can also check afterwards whether there were any elements in `y` that were not removed from `x`, by testing whether any positive counts remain in the `Counter`.

I think this is faster:
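The original snippet for this answer is not shown; a plausible stand-in (hypothetical, not necessarily the author's code) uses `set.difference`, which accepts any iterable:

```python
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
y = [1, 3, 5, 7, 9]

# set.difference takes any iterable, so y needs no conversion
result = list(set(x).difference(y))
print(sorted(result))  # [0, 2, 4, 6, 8]
```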

I think the easiest way to achieve this is by using `set()`.

This example subtracts two lists:
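A minimal sketch of that subtraction:

```python
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
y = [1, 3, 5, 7, 9]

result = list(set(x) - set(y))
print(sorted(result))  # [0, 2, 4, 6, 8]
```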