Bogo sort is a very stupid way to sort an array, one I often refer to as a joke.
However, conceptually it is not totally stupid: this approach is very parallelizable. If you have $n!$ computers, you could check every case at once for a cost of $\mathcal{O}(n)$. Even better, if you have $n$ computers for each array, then you could check the order of every single configuration at once in $\mathcal{O}(\log_2(n))$: compare all adjacent pairs in parallel, then AND-reduce the results.
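For us sequential mortals, here is a minimal sketch of what all those computers would be evaluating, simulated with a plain loop (the function name is mine; one iteration stands in for one of the $n!$ computers):

from itertools import permutations

def deterministic_bogo_sort(arr):
    # one loop iteration = one of the n! "computers" testing its permutation;
    # the all(...) check is what n computers per array could do in O(log n)
    for perm in permutations(arr):
        if all(perm[i] <= perm[i+1] for i in range(len(perm) - 1)):
            return list(perm)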
This is assuming non-contiguous memory access is not a problem... and forgetting about first copying the whole array into $n!\cdot n$ computers (sort of, because it could actually be fewer)... and forgetting about machine-to-machine communication (maybe it was a big GPU from the beginning).
Once the computation is done, we need to find a possible solution (it may not be unique), which could cost $\mathcal{O}(1)$ if every computer can broadcast on its own channel to every other computer at once, which is slightly optimistic.
So I guess the world needs a bogo integrator™: the integration equivalent of bogo sort.
The idea would be that, given:
⋅ $f:ℝ→A$ (where $A$ is anything such that "integrating $f$" makes sense)
⋅ $N$ the step count (integrator precision)
⋅ $x_0$ and $x_N$ the range of integration
⋅ a distribution function on $A$ for "random generation" (could be $\mathcal{N}(0,I_p)$ when $A≈ℝ^p$)
⋅ $K$ the attempt count
we get an output $F$ that is an array of size $N+1$ with $F[0]=0$ and $F[n]≈∫_{x_0}^{x_n}f(x)dx$ (a guess only matches $f$ through its finite differences, so we pin the constant of integration by shifting $F[0]$ to $0$)
(we could also discard all the intermediate values and only output $F[N]$)
def diff(f, δt):
    # finite differences: the discrete derivative of a sampled function
    return [(f[n+1] - f[n]) / δt for n in range(len(f) - 1)]

def error(target, candidate):
    # mean squared error between the sampled f and a guess's derivative
    return sum((t - c)**2 for t, c in zip(target, candidate)) / len(target)

def bogo_integral(f, x_0, x_N, N, dist, K=10_000):  # ideally K = ℵ₁
    δt = (x_N - x_0) / N                        # needed to compute diff
    target = [f(x_0 + n*δt) for n in range(N)]  # diff(best, δt) needs to match this
    best = [dist() for n in range(N + 1)]       # start with a random pick
    err_best = error(target, diff(best, δt))    # compute its error
    for k in range(K):                          # K attempts
        guess = [dist() for n in range(N + 1)]  # random pick
        err_guess = error(target, diff(guess, δt))
        if err_guess < err_best:                # have we improved?
            best, err_best = guess, err_guess
    return [b - best[0] for b in best]          # pin F[0] = 0
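As a hypothetical sanity check of the sketch above (the numbers are picked arbitrarily, and a standard normal stands in for the distribution on $A=ℝ$):

import math, random

# F[n] "should" approach sin(x_n) on [0, 2]... it almost surely won't
F = bogo_integral(math.cos, 0.0, 2.0, N=16,
                  dist=lambda: random.gauss(0.0, 1.0), K=100_000)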
This algorithm is actually infinitely more inefficient than bogo sort: we would now need a really, really large number of computers before a parallelized version of this method pays off. The simple fact that $A$ can now be $ℝ$, and that we sample from a distribution on $A$ that may favour a region completely off target, ensures we will almost surely fail to get anything reasonable.
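Back-of-the-envelope (ignoring the correlations that diff introduces between neighbouring samples): if each of the $N$ derivative samples independently lands within $ε$ of its target with probability $p<1$, a whole guess succeeds with probability roughly $p^N$, so the attempt count $K$ needed before we expect one decent guess grows like $p^{-N}$, exponentially in the precision.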