A coding impromptu
This post is a rolling collection of algorithms and computational ideas I like, implemented in BQN:
Extrapolating Perlis' remark1, it's likely that a group of 50 individuals would devise 35 to 40 distinct solutions to even the simplest problem in BQN. Therefore, I will frequently juxtapose my implementations with those of seasoned BQNators2.
The infamous _while_ modifier
I call it infamous because it made me feel stupid twice: first when I encountered it in code, and again when I read its docs. To understand its behaviour, you need to be familiar with quite a bit of BQN, especially the functional programming and combinators aspects. As a newbie at the time, I found it quite daunting. It took me about five scattered attempts over several months to get it. Looking back, the difficulty wasn't so much BQN's syntax, but my struggle to express complex recursion, which modifier recursion definitely is.
An unrolling of the first two steps reveals that up to 2⋆n evaluations of 𝔽 can occur at recursion level n. This is derived by noting that, within the BQN combinator, the left function of the rightmost atop dictates the 𝔽 used in the subsequent step, in accordance with 𝔽 _𝕣_ 𝔾:
_w0_ ← {𝔽⍟𝔾∘𝔽_𝕣_𝔾∘𝔽⍟𝔾𝕩}
_w1_ ← {(𝔽⍟𝔾∘𝔽)⍟𝔾∘(𝔽⍟𝔾∘𝔽)_w0_𝔾∘(𝔽⍟𝔾∘𝔽)⍟𝔾𝕩}
_w2_ ← {((𝔽⍟𝔾∘𝔽)⍟𝔾∘(𝔽⍟𝔾∘𝔽))⍟𝔾∘((𝔽⍟𝔾∘𝔽)⍟𝔾∘(𝔽⍟𝔾∘𝔽))_w0_𝔾∘((𝔽⍟𝔾∘𝔽)⍟𝔾∘(𝔽⍟𝔾∘𝔽))⍟𝔾𝕩}
Another way to clarify the concept is to implement the same logic both as a function and as a 1-modifier, and then compare these implementations with the two 2-modifiers (one exhibiting linear and the other a logarithmic number of stack frames):
Whiles ← {F‿G𝕊𝕩:
  Wfun ← {𝕎⍟G∘𝕎˙⊸𝕊∘𝕎⍟G𝕩}
  _wom ← {𝔽⍟G∘𝔽_𝕣∘𝔽⍟G𝕩}
  _wtmlog_ ← {𝔽⍟𝔾∘𝔽_𝕣_𝔾∘𝔽⍟𝔾𝕩}
  _wtmlin_ ← {𝕊∘𝔽⍟𝔾𝕩}
  ⟨f Wfun 𝕩, f _wom 𝕩, f _wtmlog_ g 𝕩, f _wtmlin_ g ⎊"SO"𝕩⟩
}
Let's test it with a simple iteration that exceeds CBQN's recursion limit, triggering a stack overflow:
⟨1⊸+, 5000⊸≥⟩ Whiles 0
⟨ 5001 5001 5001 "SO" ⟩
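For readers who want the recursion structure without the BQN combinators, here is an illustrative Python sketch (my own port, not the BQN semantics): the linear version spends one stack frame per iteration, while the self-composing version doubles the work done by the iterated function at each level, so its stack depth is only logarithmic.

```python
def while_linear(f, g, x):
    """One stack frame per iteration: O(n) recursion depth."""
    return while_linear(f, g, f(x)) if g(x) else x

def while_doubling(f, g, x):
    """Self-composition: each level passes down a function that applies f
    up to twice (re-checking the guard in between), so the recursion depth
    is only O(log n) even though up to 2^n calls of f occur at level n."""
    def f2(y):              # f applied once, then again if the guard holds
        y = f(y)
        return f(y) if g(y) else y
    if not g(x):
        return x
    x = f2(x)
    return while_doubling(f2, g, x) if g(x) else x

inc, below = (lambda v: v + 1), (lambda v: v < 5000)
print(while_doubling(inc, below, 0))  # 5000, using only ~13 stack levels
try:
    while_linear(inc, below, 0)
except RecursionError:
    print("SO")  # the linear version overflows Python's default stack
```

The `try`/`except` mirrors the `⎊"SO"` catch in the BQN test above.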
Z algorithm
This is a very efficient procedure that computes, for every position of a string, the length of the longest substring starting there that matches a prefix, in linear time. The imperative implementation reads:
ZI ← {𝕊s:
  l‿r‿z ← 0⚇0 0‿0‿s
  z ⊣ {
    v ← r(⊢1⊸+⊢_while_{(𝕩+𝕨)<≠s ? =´⟨𝕩,𝕩+𝕨⟩⊑¨<s ; 0}<◶({z⊑˜𝕩-l}⌊-+1)‿0)𝕩
    r <◶@‿{𝕊: l↩𝕩-v+1 ⋄ r↩𝕩} 𝕩+v-1
    z v⌾(𝕩⊸⊑)↩
  }¨ ↕≠s
}
ZI "abacabadabacaba"
⟨ 15 0 1 0 3 0 1 0 7 0 1 0 3 0 1 ⟩
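The same procedure reads quite naturally in Python (a sketch of the standard linear-time Z algorithm, keeping the l/r window names from the version above):

```python
def z_array(s):
    """z[i] is the length of the longest substring starting at i that
    matches a prefix of s; [l, r) is the rightmost match window found."""
    n = len(s)
    z = [0] * n
    z[0] = n
    l = r = 0
    for i in range(1, n):
        if i < r:                  # reuse information from the window
            z[i] = min(r - i, z[i - l])
        while i + z[i] < n and s[z[i]] == s[i + z[i]]:
            z[i] += 1              # extend the match explicitly
        if i + z[i] > r:           # remember the rightmost window
            l, r = i, i + z[i]
    return z

print(z_array("abacabadabacaba"))
# [15, 0, 1, 0, 3, 0, 1, 0, 7, 0, 1, 0, 3, 0, 1]
```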
Two algorithmic improvements can be made here: iterate only over indices where the character equals the first character, and search to extend the count only if the current match reaches the end of r:
ZFun β {πs: CountEq β { 1βΈ+β’_while_((β π¨)βΈβ€βΆβ¨ββπ¨β‘ββπ©,0β©) 0 } lβrβ0 β Ulr β {(rββ©π¨+π©)>r ? lβ©π¨ β π©; π©} SearchEq β β£ Ulr β’ + + CountEqβ(ββs) β’ Set β {iππ©: ((r-i) (i SearchEq 0ββ£)ββ€ (i-l)βπ©)βΎ(iβΈβ) π© } (β½1β/ββΈ=s) SetΒ΄Λ βΛβ s }
I came up with two array versions, with quadratic and cubic time complexities respectively:
ZAQ ← ¯1↓↓(+´·∧`⊣=≠⊸↑)¨<
ZAC ← (+´∧`)¨<=↕∘≠{«⍟𝕨𝕩}⌜<
(ZAQ≡ZAC)◶@‿ZAC "abacabadabacaba"
⟨ 15 0 1 0 3 0 1 0 7 0 1 0 3 0 1 ⟩
With further refinements, the earlier solutions can be transformed into:
ZAQ‿ZAC ← {(+´∧`)¨𝕏}¨ ⟨≠↑↓=⌽∘↑, <=«⍟(↕∘≠)⟩
Longest increasing sub-sequence
This problem can be solved in \(O(n\log n)\) using dynamic programming. Here is an imperative implementation which is quadratic, but can be optimized:
LISI ← {
  k‿dp ← ¯1‿(∞¨𝕩)
  {i ← ∧´◶(⊐⟜0)‿{𝕊:k+↩1} dp<𝕩 ⋄ dp 𝕩⌾(i⊸⊑)↩}¨ 𝕩
  +´∞>dp
}
LISI¨ ⟨0‿1‿0‿3‿2‿3, 10‿9‿2‿5‿3‿7‿101‿18, 7‿7‿7‿7‿7⟩
⟨ 4 4 1 ⟩
A more elegant array solution is:
LISA ← +´∞≠∞¨{𝕨⌾((⊑𝕩⍋𝕨-1)⊸⊑)𝕩}´⌽
LISA¨ ⟨0‿1‿0‿3‿2‿3, 10‿9‿2‿5‿3‿7‿101‿18, 7‿7‿7‿7‿7⟩
⟨ 4 4 1 ⟩
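The O(n log n) "tails" method behind these solutions can be sketched in Python; `bisect_left` plays the role of the binary search with strict comparison (the ⍋ with 𝕨-1 trick):

```python
from bisect import bisect_left

def lis_length(xs):
    """Length of the longest strictly increasing subsequence.
    tails[k] is the smallest possible tail of an increasing
    subsequence of length k+1."""
    tails = []
    for x in xs:
        k = bisect_left(tails, x)   # first position with tails[k] >= x
        if k == len(tails):
            tails.append(x)         # extend the unfilled region
        else:
            tails[k] = x            # replace the first element >= x
    return len(tails)

print([lis_length(v) for v in
       ([0, 1, 0, 3, 2, 3], [10, 9, 2, 5, 3, 7, 101, 18], [7, 7, 7, 7, 7])])
# [4, 4, 1]
```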
Let's )explain this optimized version, so we can truly appreciate its beauty:
+´∞≠∞¨{𝕨⌾((⊑𝕩⍋𝕨-1)⊸⊑)𝕩}´⌽
(The tree diagram printed by )explain is not reproduced here.)
The full expression is structured as a two-train: we sum all finite entries from the result of the rightmost three-train. The three-train is a right fold over the reversed input, with an initial array of ∞ of the same length as the input. In each step of the fold, we modify the right argument using under: we perform a binary search with strict comparison to find where the next element should go. The element is either placed at the end of the unfilled region, or it replaces the first element that is greater than 𝕨. Since BQN uses a based array model, we pick the enclosed atom from this operation's result. So it goes3.
N-queens problem
This problem is the archetypal example of backtracking. Initially, I tried to solve it using a function to place the queens in the full board, hoping that it would lead to a more array oriented solution:
8 {((∨⌜´0⊸=)∨(0=-⌜´)∨0=+⌜´) 𝕩-¨<↕𝕨} 2‿3
┌─
╵ 0 1 0 1 0 1 0 0
  0 0 1 1 1 0 0 0
  1 1 1 1 1 1 1 1
  0 0 1 1 1 0 0 0
  0 1 0 1 0 1 0 0
  1 0 0 1 0 0 1 0
  0 0 0 1 0 0 0 1
  0 0 0 1 0 0 0 0
                  ┘
This resulted in a more complicated algorithm, so I decided to go for the classical Wirth implementation:
NQ ← {𝕊n:
  V‿P ← {⊣𝕏(⊢∾-⋈+)´∘⊢}¨ ⟨∨´∊¨˜, {1⌾(𝕩⊸⊑)𝕨}¨⟩
  {n≠𝕩 ? +´(𝕨V⊢)◶⟨(𝕩+1)𝕊˜𝕨P⊢, 0⟩∘(𝕩⋈⊢)¨ ↕n ; 1}˜´ (0⋈0×·↕¨⊢∾·⋈˜+˜)n
}
Which nicely compares with the OEIS sequence:
a000170 ← 1‿0‿0‿2‿10‿4‿40‿92
a000170 ≡ NQ¨ 1+↕8
1
And of course, in the implementation above I could have used a single array instead of three, but I find the resulting validation and position functions very aesthetic the way they are.
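The Wirth-style scheme, with one flag set per column and per diagonal instead of a full board, can be sketched in Python as follows:

```python
def n_queens(n):
    """Count N-queens solutions by backtracking. A square (row, c) is
    attacked iff its column, its / diagonal (row+c) or its \\ diagonal
    (row-c) is already occupied."""
    cols, d1, d2 = set(), set(), set()

    def place(row):
        if row == n:
            return 1
        total = 0
        for c in range(n):
            if c in cols or (row + c) in d1 or (row - c) in d2:
                continue                      # attacked: prune this branch
            cols.add(c); d1.add(row + c); d2.add(row - c)
            total += place(row + 1)
            cols.remove(c); d1.remove(row + c); d2.remove(row - c)
        return total

    return place(0)

print([n_queens(k) for k in range(1, 9)])  # prefix of OEIS A000170
# [1, 0, 0, 2, 10, 4, 40, 92]
```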
Majority element
The Boyer–Moore algorithm finds the majority element (an element that appears more than ≠𝕩÷2 times in the array) in linear time. If such an element exists, then it is equal to the mode of the data, and for this task we have a nice array solution. The original implementation could be expressed as:
BM ← {v←0 ⋄ I←⊢⊣=◶{𝕊:v+↩1}‿{𝕊:v-↩1} ⋄ 0{𝕊:v=0}◶⟨I,I˜⊣⟩´𝕩}
BM 6‿1‿3‿1‿3‿3‿4‿3‿3‿5
3
The previous fold tracks the majority element as state; a more elegant approach maintains the number of votes:
BM ← {e←@ ⋄ 0{𝕩=0 ? e↩𝕨 ⋄ 1 ; 𝕩+¯1⋆e≢𝕨}´𝕩 ⋄ e}
BM 6‿1‿3‿1‿3‿3‿4‿3‿3‿5
3
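The vote-counting formulation translates directly to Python (one pass, O(1) space; when a majority is not guaranteed, a second pass should verify the candidate):

```python
def majority_candidate(xs):
    """Boyer-Moore majority vote: keep a candidate and a vote counter;
    matching elements add a vote, mismatches remove one, and a zero
    count adopts the current element as the new candidate."""
    candidate, votes = None, 0
    for x in xs:
        if votes == 0:
            candidate, votes = x, 1
        elif x == candidate:
            votes += 1
        else:
            votes -= 1
    return candidate

print(majority_candidate([6, 1, 3, 1, 3, 3, 4, 3, 3, 5]))  # 3
```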
An identity on the naturals
Some time ago, while working on performance optimization of linear algebra operations with Boolean arrays, I encountered an interesting summation property for an array \(a\) of length \(n\):
\begin{equation*} \sum_{i \mid a_i \neq 0} \; \sum_{j=i+1}^{n-1} f_j = \sum_{j=0}^{n-1} f_j \sum_{i < j \mid a_i \neq 0} 1 \end{equation*}

It turns out that the RHS can be elegantly transformed into a scan, giving rise to a beautiful identity that applies to all natural numbers, not just Booleans as I initially thought:
(+`≡·+´/≤⌜⊒˜) •rand.Range˜ 1e3
1
This identity holds because ⊒˜ represents the indices i of the list, and since +´(/𝕩)=i ←→ i⊑𝕩, the fold sums all the elements in 𝕩 up to i, for i in the range of the length of the list. Ergo, a scan.
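Spelled out in Python: /𝕩 turns the array into "index tokens" (index i repeated 𝕩[i] times), and counting how many tokens are ≤ j recovers the j-th inclusive prefix sum:

```python
import random

def prefix_via_counts(x):
    """For a natural-number list x, the j-th inclusive prefix sum equals
    the number of index tokens (i repeated x[i] times, BQN's /x) that
    are <= j."""
    tokens = [i for i, v in enumerate(x) for _ in range(v)]
    return [sum(t <= j for t in tokens) for j in range(len(x))]

def prefix_scan(x):
    out, acc = [], 0
    for v in x:
        acc += v
        out.append(acc)
    return out

random.seed(42)
xs = [random.randrange(1000) for _ in range(100)]
print(prefix_via_counts(xs) == prefix_scan(xs))  # True
```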
Depth of nested lists
Studying tree algorithms in APL, I learned about the depth vector representation. If the nested object under consideration is a string, the best approach is to use boolean masks. However, when dealing with a BQN list, recursion becomes necessary to determine the depth of nested elements. Here's how it can be implemented:
{=◶⟨⋈0, 1+·∾𝕊¨⟩𝕩} ⟨1, ⟨2, ⟨3⟩, ⟨4, ⟨5, ⟨6, 7⟩⟩⟩⟩, 1⟩
⟨ 1 2 3 3 4 5 5 1 ⟩
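The recursive scheme is the same in any language; a Python sketch that emits one depth per leaf, in order:

```python
def depths(nested):
    """Depth vector of a nested list: one entry per leaf, giving the
    nesting level at which that leaf sits (top level = 1)."""
    out = []

    def walk(node, d):
        for item in node:
            if isinstance(item, list):
                walk(item, d + 1)   # descend: depth grows by one
            else:
                out.append(d)       # leaf: record its depth
    walk(nested, 1)
    return out

print(depths([1, [2, [3], [4, [5, [6, 7]]]], 1]))
# [1, 2, 3, 3, 4, 5, 5, 1]
```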
H-index
This metric is one of the reasons for the deplorable state of modern academia, and for the headaches of outsiders trying to get in. Consider that Peter Higgs has an estimated h-index of only 12. By contrast, a random professor nowadays boasts an h-index ten times as high, with exponentially less impact. Enough ranting; let's concentrate on finding an elegant way to implement this useless thing:
HL ← (+´∘«⊒˜≤+`⌾⌽)·/⁼≠⊸⌊
HS ← +´∨≥1+⊒˜
(HL≡HS)◶@‿HL 14‿14‿11‿9‿5‿5‿1‿1‿1‿1‿0
5
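Both strategies are easy to state in Python: the sorting version counts ranks whose citation count still covers them, and the linear version histograms the clipped counts and accumulates from the top:

```python
def h_index(citations):
    """Sorting version: the largest k such that the k-th highest cited
    paper has at least k citations."""
    s = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(s, start=1) if c >= rank)

def h_index_linear(citations):
    """O(n) version: clip counts at n (an h-index cannot exceed n),
    histogram them, then walk down accumulating papers."""
    n = len(citations)
    counts = [0] * (n + 1)
    for c in citations:
        counts[min(c, n)] += 1
    papers = 0
    for h in range(n, -1, -1):
        papers += counts[h]
        if papers >= h:
            return h
    return 0

print(h_index([14, 14, 11, 9, 5, 5, 1, 1, 1, 1, 0]))  # 5
```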
If someone ever published that much, sorting will eventually be slower:
HL‿HS {𝕏•_timed𝕩}¨< 1e8 •rand.Range 1e3
⟨ 0.083824959 0.21801262700000001 ⟩
A testament to the idea that the simplest solution in BQN is often the most efficient:
I initially clip my citations array with {≠¨⊔≠∘𝕩¨⌾(≥⟜≠∘𝕩⊸/)𝕩}, which is just /⁼≠⊸⌊.
Trapping rain water
This is a classical interview problem that can be solved in linear time. Interestingly, it admits a very elegant array solution:
(+´⊢-˜⌈`⌾⌽⌊⌈`) [0,1,0,2,1,0,1,3,2,1,2,1]
6
That is, we take the minimum of max-scans from the left and from the right, and subtract the corresponding height. Reducing the resulting array gives the amount of trapped water.
A closely related problem is container with most water, which unfortunately is not so easy to solve in linear time using an array approach (one can easily implement the imperative two pointers solution in BQN, but it will probably be slow). Here are two solutions, one \(O(n^2)\) and the other \(O(n\log n)\), both tacit:
β¨βΒ΄ββ₯ββΛΓΒ·-βΛβΛ, βΒ΄β¨Γ(β`βΈ-ββ’-β`)βββ© {10 πβ’_timedπ©}Β¨< β’rand.RangeΛ1e4
β¨ 0.080050875 4.14558eΒ―5 β©
Computing edit distances
The Levenshtein (or edit) distance is a measure of the similarity between two strings. It is defined by the following recurrence, which is the basis of dynamic programming algorithms like Wagner-Fisher:
\begin{align*} d_{i0} &= i, \quad d_{0j} = j, \\ d_{ij} &= \min \begin{cases} d_{i-1,j-1} + \mathbf{1}_{s_i \neq t_j} \\ d_{i-1,j} + 1 \\ d_{i,j-1} + 1 \end{cases} \end{align*}There is an elegant implementation of a variation of the WagnerβFischer algorithm in the BQNcrate. It has been particularly challenging for me to understand itβnot due to the clarity of the primitives, but rather because of the clever transformation employed. I believe that this variant can be derived by shifting the distance matrix. Given two strings \(s\) and \(t\) of lengths \(n\) and \(m\), respectively, we define a new distance matrix as follows:
\begin{equation*} p_{ij} = d_{ij} + n - i + m - j \end{equation*}Under this transformation, the recurrence relation becomes:
\begin{align*} p_{i0} &= p_{0j} = m + n, \\ p_{ij} &= \min \begin{cases} p_{i-1,j-1} + \mathbf{1}_{s_i \neq t_j} - 2 \\ p_{i-1,j} \\ p_{i,j-1} \end{cases} \end{align*}The above recurrence can be easily identified in the 3-train's middle function, which is folded over the table of the costs (table comparing the characters). Note that we compare insertions and substitutions, and then we can do a min scan over the result to get the deletions, which gives a vectorised implementation.
The only part I can't quite piece together is the construction of the cost table,
which is done by reversing \(t\). Given that the final result for \(p_{ij}\) β is located
in the bottom-right corner and we use foldr
, I would have expected \(s\) to be the
one reversed instead. However, both approaches work, as demonstrated by the following code:
_l β {Β―1β(1βΈ+β₯+)ββ (β`β’βββΈΒ»ββ’-0βΎ1+β£)Λπ½} T β β½βΈ(=β)_lβ‘=βββ½_l Tβ{@+97+π©β’rand.Range 25}Β΄ 1e4βΏ1e5
1
I suspect the above can be explained by the following properties of the Levenshtein distance:
- \(L(s,t) = L(t,s)\)
- \(L(s,t) = L(\text{rev}(s),\text{rev}(t))\)
- \(L(\text{rev}(s),t) = L(s,\text{rev}(t))\)
If you know why both formulations work, please let me know!
Solving the cubic equation
This function computes the real roots of an arbitrary cubic equation. Initially, the equation is transformed into its depressed form via an appropriate substitution. Depending on the sign of the discriminant, the roots are then determined using Cardano's method when the discriminant is positive, or ViΓ¨teβs trigonometric method when it is negative. In the case where the discriminant is zero, the proportionality to the square of the Vandermonde polynomial implies that a repeated root is present, the roots are resolved through direct analytical methods. We have chosen those methods to avoid using complex numbers, which are not yet supported in BQN.
Cub β {aβΏbβΏcβΏd: (bΓ·3Γa)-Λβ’math{ π©>0 ? +Β΄π©(π.Cbrt+β-)ββΛ-qΓ·2; π©=0 ? 0βΈ=βΆβ¨Β―1βΏ2βΏ2Γ·ΛΒ·π.CbrtΓβ4,3βΈβ₯β©q; (2Γβ-pΓ·3)Γπ.Cos(2ΓΟΓββΈΓ·3)-Λ3Γ·Λπ.Acos(β-3Γ·p)Γ1.5ΓqΓ·p }(27Γ·Λpβ3)+4Γ·ΛΓΛqβ(dΓ·a)-(27Γ·Λ3βΛbΓ·a)+3Γ·ΛbΓaΓ·Λpβ(cΓ·a)-3Γ·ΛΓΛbΓ·a }
The above implementation only works for the case where aβ’0
, it will yield NaN
otherwise.
Here are some tests for the four possible branches:
CubΒ¨ β¨1βΏ0βΏΒ―7βΏ6, 1βΏΒ―1βΏΒ―8βΏ12, 1βΏΒ―6βΏ12βΏΒ―8, 1βΏ3βΏ0βΏΒ―1β©
β¨ β¨ 2.0000000000000004 1 Β―3.0000000000000004 β© β¨ Β―2.9999999999999996 1.9999999999999998 1.9999999999999998 β© β¨ 2 2 2 β© β¨ 0.532088886237956 Β―0.6527036446661387 Β―2.879385241571817 β© β©
QR decomposition
I put some effort golfing this QR decomposition implementation, and I got a very satisfying 98 chars one-liner. Ungolfed a bit, it looks like this:
QR β +ΛβΓβ1βΏβ{ 1=β’Β΄β’π© ? π©βΈΓ·βββΈββ+ΛΓΛπ©; βΎΛ{(qπ½π¨)β(rπ½t)βΎ0π½βkπ©}Β΄ππ½{π-π©π½tβ©π©ββΈπ½π}(kβΛπ©)βqβΏrβππ©βΛΛkββ2Γ·Λβ’Β΄β’π©β£tβ@ }
The function works like this: it recursively computes the QR decomposition of a matrix by first handling the base case (normalizing a single column) then splitting the matrix into two halves. The first half is decomposed into \(Q_0\) and \(R_0\), and the second half is orthogonalized against \(Q_0\) by subtracting its projection, yielding a residual matrix that is itself decomposed into \(Q_1\) and \(R_1\). Finally, the overall orthogonal matrix \(Q\) is formed by horizontally concatenating \(Q_0\) and \(Q_1\), and the upper triangular \(R\) is assembled as a block matrix combining \(R_0\), the projection coefficients, and \(R_1\):
\begin{equation*} Q \, R = \begin{pmatrix} Q_0 & Q_1 \end{pmatrix} \begin{pmatrix} R_0 & T \\ 0 & R_1 \end{pmatrix} = Q_0 R_0 + Q_0 T + Q_1 R_1, \end{equation*}We can test it with random matrices:
(β’βΎβ<m-+ΛβΓβ1βΏβΒ΄) QR m β 3βΏ3β’rand.Range 0
ββ Β· ββ ββ ββ β΅ 0.8157427013276365 Β―0.577946856084976 0.02326535562123689 β΅ 0.9106163258394209 0.7411115590785274 0.7652096291273813 β΅ 0 0 0 0.32843727859545113 0.4297133155667652 Β―0.8411155809122974 0 0.709988720748101 0.15322713799622295 0 0 0 0.476122672490509 0.6937751061879561 0.5403547934222346 0 0 0.36577814222564664 0 0 0 β β β β
Fast Fourier Transform
Below is an implementation of the radix-2 CooleyβTukey FFT algorithm. The function leverages BQN's headers to define the inverse transform in a succinct way using the property:
\begin{equation*} \text{iFFT}[\mathbf{x}] = \frac{1}{N}\text{FFT}^{*}[\mathbf{x}^{*}] \end{equation*}
We also define a namespace for dealing with complex numbers, in particular the Cis
function:
z β { _p β {(-Β΄π½Β¨)β(+Β΄π½Β¨)ββ½} CβΏE β β¨ββ-Β΄Λ, β’math{π.CosβΛπ.Sin}β© } FFT β {πβΌ: z.C{β Γ·ΛΒ·π½πΎβπ½}ππ©; (1=β )βΆβ¨(+βΎ-)β(β’Γz._pΛΒ·z.Eβ-ΟΓββΈΓ·ββ )Β΄(πΒ¨β’βΛ2|βΛ), β’β©π©}
Let's confirm that the inverse returns back the original list:
(+Β΄ββ₯β’-FFTβΌβFFT) 0β’rand.RangeΛ2βΛ2β10
1.914614300435602eΒ―14
We could also compare with the discrete Fourier transform, which despite being \(O(N^2)\)
has should have a nice array formulation:
DFT β βΛΒ΄<Λ{π½ββ+ΛβΓβ1βΏβ z._pΛΒ·π½1βΏ0ββΌΒ·z.E Β―2ΓΟΓβ Γ·ΛΒ·ΓβΛβΛ} (+Β΄ββ₯FFT-DFT) 0β’rand.RangeΛ2βΛ2β10
Β―2.8412011632283907eΒ―10
In the DFT code above, I got into a big mess with the complex numbers, because the
z
namespace was too tightly coupled with the FFT implementation. I had to do a
bunch of enclosing and coupling to get the same shape. With proper complex numbers
support it would be something like:
DFT β β’+ΛβΓβ1βΏβΛΒ·βΒ―2ΓΟΓβ Γ·ΛΒ·ΓβΛβΛ
Tensor n-mode product
The n-mode product is a key ingredient for computing the Tucker decomposition of a tensor.
For this we can use the HOSVD algorithm: a method that has been rediscovered several times.
For example, in the nuclear quantum dynamics community it is known as POTFIT
and
was published before the often cited De Lathauwer paper, see arXiv:1309.5060 for a discussion.
For a tensor \(\mathcal{X}\) and a matrix \(U\) we define:
In BQN's parlance, we can express it as:
{+ΛβΓβ1βΏββπ©βΎ(ββπ)π¨}
A beautiful example of notation as a tool of thought, in my opinion: this deferred 1-modifier (itself a compact melange of six modifiers) computes the π-mode product of a tensor π¨ and a matrix π©. It works by moving the π-axis to the front, then multiplying π¨ and π© without the need for explicit unfolding, courtesy of the rank operator, and moving the last axis of the result back to π, all gracefully managed by under.
Footnotes:
Almost Perfect Artifacts Improve only in Small Ways: APL is more French than English, Alan J. Perlis (1978). From jsoftware's papers collection.
Initially, I intended to rigorously attribute all contributions, but this quickly filled the text with footnotes. I often get help streamlining my solutions from Marshall Lochbaum (the BQN creator), dzaima (the CBQN developer), and other fine folks from the BQN matrix room, thank you all! Please check the logs for more context.