Python Exercises: Solutions
Solutions to some of the exercises in this collection of programming exercises.
Median Value
def median_value(a, b, c):
    if a > b:
        if c > a:
            return a
        elif c > b:
            return c
        else:
            return b
    else:
        if c > b:
            return b
        elif c > a:
            return c
        else:
            return a
Leap Year
def leap_year(y):
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
A number \(n\) is divisible by another number \(d\) when the remainder of the integer division \(n/d\) is 0. In Python, you write this Boolean condition as n % d == 0.
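As a quick sanity check, the rule can be exercised on a few well-known years (the function is restated here so the snippet runs on its own):

```python
def leap_year(y):
    # divisible by 4, except century years, unless divisible by 400
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

assert leap_year(2024)       # divisible by 4, not a century year
assert not leap_year(2023)   # not divisible by 4
assert not leap_year(1900)   # century year not divisible by 400
assert leap_year(2000)       # century year divisible by 400
```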
Classify Triangle
def classify_triangle(a, b, c):
    # it is convenient to first sort the three lengths a, b, and c,
    # so that we can refer to the maximal length as a
    if b > a:
        a, b = b, a    # swap a <==> b
    if c > a:
        a, c = c, a    # swap a <==> c
    if a >= b + c:     # the triangle inequality a < b + c must hold
        print('impossible')
    else:
        if a*a > b*b + c*c:
            angle = 'obtuse'
        elif a*a < b*b + c*c:
            angle = 'acute'
        else:
            angle = 'right'
        if a == b and b == c:
            sides = 'equilateral'
        elif a == b or b == c or a == c:
            sides = 'isosceles'
        else:
            sides = 'scalene'
        print(angle, sides)
In order to decide whether a triangle is acute, right, or obtuse, and also to see whether it is impossible to build a triangle with the given segment lengths, it is convenient to identify the maximal length. Therefore, in the first part of the program, we shuffle the lengths around so that the maximal length ends up in \(a\). After that, we can check whether the “triangle inequality” is satisfied by simply checking that \(a < b + c\). Similarly, we can determine whether the triangle is acute, right, or obtuse, by comparing \(a^2\) and \(b^2+c^2\).
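To exercise the classification logic end to end, one can capture what the function prints. The snippet below restates the function so it is self-contained; note that it treats the degenerate case \(a = b + c\) as impossible, which is one possible reading of the triangle inequality:

```python
import io
from contextlib import redirect_stdout

def classify_triangle(a, b, c):
    # move the maximal length into a
    if b > a:
        a, b = b, a
    if c > a:
        a, c = c, a
    if a >= b + c:          # degenerate a == b + c treated as impossible
        print('impossible')
    else:
        if a*a > b*b + c*c:
            angle = 'obtuse'
        elif a*a < b*b + c*c:
            angle = 'acute'
        else:
            angle = 'right'
        if a == b and b == c:
            sides = 'equilateral'
        elif a == b or b == c or a == c:
            sides = 'isosceles'
        else:
            sides = 'scalene'
        print(angle, sides)

def classify(a, b, c):
    # helper that captures the printed classification as a string
    buf = io.StringIO()
    with redirect_stdout(buf):
        classify_triangle(a, b, c)
    return buf.getvalue().strip()

assert classify(3, 4, 5) == 'right scalene'      # 25 == 9 + 16
assert classify(1, 1, 1) == 'acute equilateral'
assert classify(1, 2, 10) == 'impossible'        # 10 > 1 + 2
```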
Minimum
def minimum(A):
    m = A[0]
    for i in range(1, len(A)):
        if A[i] < m:
            m = A[i]
    return m
Finding the minimum value in a sequence amounts to performing a full scan of the sequence, in which we progressively update the minimum value. In this sense, this algorithm is equivalent to computing any other “accumulator” function of the elements. For example, the sum of all the values in the sequence can be computed as follows:
def sum(A):   # note: this redefines Python's built-in sum
    s = 0
    for a in A:
        s = s + a
    return s
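As another illustration of the same accumulator pattern (a sketch, not part of the original exercise set), the product of the values can be computed by starting the accumulator from 1, the neutral element of multiplication:

```python
def product(A):
    p = 1                # neutral element of multiplication
    for a in A:
        p = p * a
    return p

assert product([2, 3, 4]) == 24
assert product([]) == 1   # empty product is the neutral element
```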
In fact, with a min(a, b) function that computes the minimum between \(a\) and \(b\), we can also rewrite the algorithm as follows:
def minimum(A):
    m = A[0]
    for i in range(1, len(A)):
        m = min(m, A[i])
    return m
Position in Sequence
def position_in_sequence(A, x):
    for i in range(len(A)):
        if x == A[i]:
            return i
    return -1
This is another classic linear-scan algorithm. In this case, we return as soon as we find the given value \(x\).
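A couple of examples of the linear search above (restated so the snippet is self-contained); since we return on the first match, repeated values yield the index of the first occurrence:

```python
def position_in_sequence(A, x):
    for i in range(len(A)):
        if x == A[i]:
            return i
    return -1

assert position_in_sequence([5, 3, 7, 3], 3) == 1   # first occurrence
assert position_in_sequence([5, 3, 7], 9) == -1     # not found
```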
Count Lower
def count_lower(A, x):
    count = 0
    for a in A:
        if a < x:
            count = count + 1
    return count
Check Sorted
def check_sorted(A):
    for i in range(1, len(A)):
        if A[i] < A[i-1]:
            return False
    return True
Monotonic Sequence
def is_monotonic(A, i, j):
    assert i <= j and j < len(A)
    direction = 0
    for k in range(i, j):
        if A[k] < A[k+1]:
            if direction == -1:
                return False
            direction = 1
        elif A[k] > A[k+1]:
            if direction == 1:
                return False
            direction = -1
    return True
One way to solve this problem is to scan the subsequence maintaining the direction that the sequence has followed up to the current index \(k\). The variable direction can be \(-1\), indicating that the sequence so far is monotonically decreasing, \(+1\), indicating that it is monotonically increasing, or \(0\), indicating that it has been constant so far and could still go either way. At each position \(k\) we then check whether the sequence is increasing, decreasing, or constant and, if that is inconsistent with the previous direction, we immediately return False; otherwise we set the direction accordingly.
def is_monotonic(A, i, j):
    assert i <= j and j < len(A)
    if A[i] < A[j]:
        # must be increasing
        for k in range(i, j):
            if A[k] > A[k+1]:
                return False
    elif A[i] > A[j]:
        # must be decreasing
        for k in range(i, j):
            if A[k] < A[k+1]:
                return False
    else:
        # must be constant
        for k in range(i, j):
            if A[k] != A[k+1]:
                return False
    return True
Another way to solve the problem is to first check whether the sequence must be increasing, decreasing, or constant. This can be determined by comparing the values at the end-points of the subsequence. We can then check whether the subsequence is monotonic in the given direction. For example, if \(A_i < A_j\), then the subsequence is monotonic only if it does not decrease in any position \(k\) between \(i\) and \(j\).
def is_monotonic(A, i, j):
    assert i <= j and j < len(A)
    # first check whether it is non-decreasing
    non_decreasing = True
    for k in range(i, j):
        if A[k] > A[k+1]:
            non_decreasing = False
            break
    if non_decreasing:
        return True
    # otherwise check whether it is non-increasing
    for k in range(i, j):
        if A[k] < A[k+1]:
            return False
    return True
Yet another possible solution uses two simple scans: the first checks whether the subsequence is non-decreasing, in which case we immediately return True; otherwise we go into the second scan, which checks whether the subsequence is non-increasing.
All these solutions perform either one or, in the worst case, two linear scans of the subsequence, and therefore have \(O(\ell)\) complexity, where \(\ell = j - i\) is the length of the given subsequence of \(A\).
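A few test cases exercise the corner cases of any of the three versions; the direction-tracking version is restated below so the snippet runs on its own:

```python
def is_monotonic(A, i, j):
    assert i <= j and j < len(A)
    direction = 0          # -1 decreasing, +1 increasing, 0 undecided
    for k in range(i, j):
        if A[k] < A[k+1]:
            if direction == -1:
                return False
            direction = 1
        elif A[k] > A[k+1]:
            if direction == 1:
                return False
            direction = -1
    return True

assert is_monotonic([1, 2, 2, 5], 0, 3)     # non-decreasing
assert is_monotonic([5, 2, 2, 1], 0, 3)     # non-increasing
assert is_monotonic([7, 7, 7], 0, 2)        # constant counts as monotonic
assert not is_monotonic([1, 3, 2], 0, 2)    # up then down
assert is_monotonic([1, 3, 2], 1, 2)        # but the sub-range 1..2 is fine
```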
Explain, Analyze, and Improve an Algorithm! (Algorithm-X)
Question 1
algorithm_x(A) returns the maximal length of any monotonic subsequence of \(A\).
Question 2
algorithm_x(A) runs in time \(\Theta(n^3)\). The two main nested loops essentially iterate over all pairs of indexes \((i,j)\) such that \(i \le j\). In essence, that is an iteration over all the subsequences of \(A\) of at least two elements, starting at \(i\) and ending at \(j\). Then, for each one of these subsequences, the algorithm invokes the is_monotonic algorithm, which performs a linear scan of the subsequence. Therefore, roughly but intuitively, there are \(\Theta(n^2)\) subsequences, and the average length of the subsequences is \(\Theta(n)\), which yields a total complexity of \(\Theta(n^3)\).
Another, and also a bit more precise, way to look at the complexity of algorithm_x(A) is again to see the two main loops as a choice of two indexes \(1\le i \le j \le n\) that define a subsequence \(A_{i,j}\) of \(A\), and then to see the iteration over \(A_{i,j}\) performed by the is_monotonic algorithm as the choice of a third index \(k\) between \(i\) and \(j\). In total, the time complexity is therefore proportional to the number of ways you can choose three indexes \(1\le i \le k\le j\le n\): \[T(n) \sim {n\choose 3}=\Theta(n^3).\]
In case you haven’t seen it before, the expression \({n\choose 3}\) is pronounced “\(n\) choose \(3\)”, and it is the number of ways you can choose \(3\) distinct elements out of a set of \(n\) elements. If instead you choose \(k\) elements out of \(n\) one at a time, with repetitions allowed, then the number of possible ordered choices is exactly \(n^k\). However, in terms of the big-oh notation, there is no difference between a choice with or without repetition. That is, \({n\choose k}=\Theta(n^k)\).
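As a small numerical check (not part of the original solution), we can count the triples \(1\le i\le k\le j\le n\) by brute force. Choosing with repetition in this ordered way gives \({n+2\choose 3}\), which grows like \(n^3/6\), i.e. \(\Theta(n^3)\), just like \({n\choose 3}\):

```python
from math import comb

def count_index_triples(n):
    # number of triples 1 <= i <= k <= j <= n, counted by brute force
    return sum(1 for i in range(1, n + 1)
                 for k in range(i, n + 1)
                 for j in range(k, n + 1))

for n in (5, 10, 20):
    t = count_index_triples(n)
    assert t == comb(n + 2, 3)   # choices with repetition: C(n+2, 3)
    # both C(n, 3) and C(n+2, 3) grow like n^3 / 6, i.e. Theta(n^3)
```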
Question 3
def better_algorithm_x(A):
    return linear_algorithm_x(A)
See the solution to Question 4 below.
Question 4
def linear_algorithm_x(A):
    if len(A) == 0:
        return 0
    m = 1
    begin = 0
    # first check non-decreasing subsequences
    for i in range(1, len(A)):
        if A[i-1] > A[i]:
            begin = i
        if i - begin + 1 > m:
            m = i - begin + 1
    begin = 0
    # then check non-increasing subsequences
    for i in range(1, len(A)):
        if A[i-1] < A[i]:
            begin = i
        if i - begin + 1 > m:
            m = i - begin + 1
    return m
The main idea for this linear algorithm is that maximal non-decreasing subsequences do not overlap. In other words, \(A\) consists of a (potentially empty) sequence of non-overlapping, locally maximal, non-decreasing subsequences, separated by decreasing subsequences. For example, the sequence \(A=1,7,7,8,6,5,6,9\) contains two locally maximal, non-decreasing subsequences, \(S_1=1,7,7,8\) and \(S_2=5,6,9\), separated by the decreasing subsequence \(8,6,5\). This is true by definition: a non-decreasing subsequence is (locally) “maximal” if it cannot be extended to a longer, still non-decreasing subsequence. So, if two maximal non-decreasing subsequences overlapped, they could be combined into a single longer non-decreasing subsequence, which means that one or both of the original subsequences was not maximal.
The key insight is that any point \(A_i\) in \(A\) belongs to exactly one locally maximal non-decreasing subsequence, so all the non-decreasing subsequences can be identified by a single linear scan in which at each point \(i\) in the input array, we know the index \(\mathit{begin} \le i\) where the current maximal non-decreasing subsequence starts. As we scan the sequence, we move the beginning index \(\mathit{begin}\) to the current position \(i\) whenever \(i\) is the end-point of a decreasing segment.
All of this holds, by analogy, for non-increasing subsequences. Notice, however, that maximal non-increasing subsequences may overlap with maximal non-decreasing subsequences. For example, in \(A=1,3,3,3,2\), there is one maximal non-decreasing subsequence, \(1,3,3,3\), and one maximal non-increasing subsequence, \(3,3,3,2\), and the two overlap.
The algorithm therefore performs two separate scans: one that identifies the longest non-decreasing subsequence, and one that then looks for a longer non-increasing subsequence. Each scan takes linear time, so the overall complexity is \(\Theta(n)\).
The two separate scans can also be combined into a single loop, just by keeping two separate \(\mathit{begin}\) variables, for the current non-decreasing and non-increasing sequence, respectively.
def linear_algorithm_x(A):
    if len(A) == 0:
        return 0
    m = 1
    begin_up = 0
    begin_down = 0
    for i in range(1, len(A)):
        if A[i-1] > A[i]:
            begin_up = i
        elif A[i-1] < A[i]:
            begin_down = i
        if i - begin_up + 1 > m:
            m = i - begin_up + 1
        if i - begin_down + 1 > m:
            m = i - begin_down + 1
    return m
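Finally, the linear algorithm can be cross-checked against a brute-force version on random inputs. The quadratic algorithm_x below is only a sketch of what the original presumably looks like, reconstructed from the description in Question 2 (enumerate all pairs \(i \le j\) and test each subsequence with is_monotonic):

```python
import random

def is_monotonic(A, i, j):
    assert i <= j and j < len(A)
    direction = 0
    for k in range(i, j):
        if A[k] < A[k+1]:
            if direction == -1:
                return False
            direction = 1
        elif A[k] > A[k+1]:
            if direction == 1:
                return False
            direction = -1
    return True

def algorithm_x(A):
    # brute force: try every subsequence A[i..j]; a sketch of the
    # quadratic algorithm described in Question 2, not the original code
    best = 1 if A else 0
    for i in range(len(A)):
        for j in range(i, len(A)):
            if is_monotonic(A, i, j):
                best = max(best, j - i + 1)
    return best

def linear_algorithm_x(A):
    if len(A) == 0:
        return 0
    m = 1
    begin_up = 0
    begin_down = 0
    for i in range(1, len(A)):
        if A[i-1] > A[i]:
            begin_up = i        # a decrease ends the non-decreasing run
        elif A[i-1] < A[i]:
            begin_down = i      # an increase ends the non-increasing run
        m = max(m, i - begin_up + 1, i - begin_down + 1)
    return m

assert linear_algorithm_x([1, 7, 7, 8, 6, 5, 6, 9]) == 4   # 1,7,7,8

random.seed(0)
for _ in range(100):
    A = [random.randint(0, 3) for _ in range(random.randint(0, 12))]
    assert linear_algorithm_x(A) == algorithm_x(A)
```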