The `@tex` macro generates LaTeX PDFs from Julia code with accompanying descriptions. It requires `pdflatex`; we recommend MiKTeX or TeX Live.

The following Julia code will produce the `default.pdf` file shown below.
```julia
using TeX

doc = TeXDocument("default") # PDF file name
doc.title = T"Simple \TeX.jl Example: \texttt{@tex}" # use T"..." to escape TeX strings (raw"..." works too)
doc.author = "Robert Moss"
doc.address = "Stanford University, Stanford, CA 94305"
doc.email = "mossr@cs.stanford.edu"
doc.date = T"\today"

addpackage!(doc, "url")

@tex doc T"In mathematical optimization, statistics, decision theory and machine learning,
a \textit{loss function} or \textit{cost function} is a function that maps an event or
values of one or more variables onto a real number intuitively representing some ``cost''
associated with the event.\footnote{\url{https://en.wikipedia.org/wiki/Loss_function}}
An optimization problem seeks to minimize a loss function. An objective function is
either a loss function or its negative (sometimes called a \textit{reward function}
or a \textit{utility function}), in which case it is to be maximized.
\begin{equation}
J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\biggl[ -y_i \log(h_{\theta}(x_i)) -
(1 - y_i) \log(1 - h_{\theta}(x_i)) \biggr]
\end{equation}" ->
function loss_function(theta, X, y)
    m = length(y) # number of training examples
    h = sigmoid(X * theta) # predicted probabilities
    J = 1/m * sum((-y'*log.(h)) - (1 .- y)'*log.(1 .- h)) # cross-entropy loss (broadcast log over vectors)
    grad = 1/m * (X'*(h - y)) # gradient with respect to theta
    return (J, grad)
end

texgenerate(doc) # compile the document to PDF
```
The output PDF is generated using `pdflatex`. It includes the title-cased function names as sections, with each description placed above its Julia function in a `lstlisting` environment. Multiple functions with `@tex` can be used in the same file; see the Advanced Example below.
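As a rough sketch of using multiple `@tex` blocks in one file (the document name and function definitions here are illustrative, not taken from the package's examples):

```julia
using TeX

doc = TeXDocument("multi") # hypothetical file name

# each @tex'd function becomes its own section in the PDF
@tex doc T"The sigmoid maps its input into $(0,1)$." ->
sigmoid(z) = 1 / (1 + exp(-z))

@tex doc T"The rectified linear unit zeroes out negative inputs." ->
relu(z) = max(z, 0)

texgenerate(doc) # both functions appear in multi.pdf
```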
Extending the same example as above, we can change the style of the document:

- JMLR style:

  ```julia
  doc.jmlr = true
  doc.title = T"JMLR \TeX.jl Example"
  texgenerate(doc)
  ```

- IEEETran style:

  ```julia
  doc.ieee = true
  doc.title = T"IEEE \TeX.jl Example"
  texgenerate(doc)
  ```

- Tufte style:

  ```julia
  doc.tufte = true
  doc.title = T"Tufte \TeX.jl Example"
  texgenerate(doc)
  ```
Click to view the PDFs:

| Default | JMLR |
|---|---|
| IEEE | Tufte |
The Tufte style will run slower (hence, it is optional) and uses `lualatex` and `pythontex`. Using `pythontex` provides better syntax highlighting support. The output PDF uses the `algorithm` and `juliaverbatim` environments included in a custom `tufte-writeup.cls`.
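As a rough sketch of how these environments nest in the generated LaTeX body (assumed from the environment names above, not taken from actual TeX.jl output):

```latex
\begin{algorithm}
\begin{juliaverbatim}
function loss_function(theta, X, y)
    # ... function body typeset with pythontex highlighting ...
end
\end{juliaverbatim}
\end{algorithm}
```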
This package also integrates with PGFPlots.jl. A simple plotting example is shown below and outputs the `pgfplots.pdf` file.
```julia
using TeX

doc = TeXDocument("pgfplots"; title=T"\TeX.jl Example using PGFPlots.jl")
addpdfplots!(doc)

@tex doc "The following Julia code produces the plot below." ->
begin
    using PGFPlots
    x = [1,2,3]
    y = [2,4,1]
    p = Plots.Linear(x, y)
    addplot!(doc, p)
end

texgenerate(doc)
```
This package also integrates with Latexify.jl. Using the `@texeq` macro, we can translate Julia expressions to LaTeX equations while simultaneously evaluating them. This also gives us the ability to use the output from Julia expressions directly in LaTeX (using string interpolation and the `tex` function).
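As a minimal sketch of this interpolation pattern (the document name and computed value here are illustrative):

```julia
using TeX

globaldoc("sqrt2"; auto_sections=false) # hypothetical document name
x = round(sqrt(2), digits=4)            # value computed in Julia...
tex("\\[ \\sqrt{2} \\approx $x \\]")    # ...interpolated directly into LaTeX
texgenerate()
```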
A fuller example is shown below and outputs the `quadratic.pdf` file.
```julia
using TeX

doc = globaldoc("quadratic"; build_dir="output_quadratic", auto_sections=false, jmlr=true)
addpackage!(doc, "url")
doc.title = T"""
Quadratic Formula \TeX.jl Example: \texttt{@texeq}%
\thanks{Julia-to-\LaTeX~expression conversions done using
Latexify.jl: \protect\url{https://github.com/korsbo/Latexify.jl}}
"""

@tex T"""
\section{Quadratic formula and its derivation}
Completing the square can be used to derive a general formula for solving quadratic equations,
called the \textit{quadratic formula}.\footnote{\url{https://en.wikipedia.org/wiki/Quadratic_equation}}
The mathematical proof will now be briefly summarized. It can easily be seen, by polynomial expansion,
that the following equation is equivalent to the quadratic equation:
"""

# using @texeqn does not evaluate (because this is not a valid Julia assignment)
@texeqn (x + b/2a)^2 = (b^2 - 4ac) / 4a^2

@tex T"""
Taking the square root of both sides, and isolating $x = \mathrm{quad}(a,b,c)$, gives:
"""

@texeq quad(a,b,c) = (-b ± sqrt(b^2 - 4a*c)) ./ 2a

@tex T"""
\section{Examples}
Using the following definition of the plus-minus function that returns a \texttt{Tuple}
in $\R^2$, we can find the roots of a few examples.
""" ->
±(a,b) = (a+b, a-b)

@tex "\\begin{itemize}\n"
ABC = [(1, 5, -14), (1, -5, -24), (1, 3, -10)]
for (a,b,c) in ABC
    tex("\\item Let \$a=$a, b=$b, c=$c\$. The quadratic formula gives us the roots \$$(Int.(quad(a,b,c)))\$.\n")
end
@tex "\\end{itemize}"

@attachfile! # embed this source file as a footnote
texgenerate()
```
A full showcase of the `@tex` macro using PGFPlots.jl and the Tufte-style class is shown below (producing the `ml.pdf` file). Note that the margin figures are generated using the exact Julia code shown within the document.
```julia
using TeX
using PGFPlots
using LinearAlgebra

doc = globaldoc("ml"; tufte=true)
doc.title = "Loss Functions in Machine Learning"
doc.author = "Robert Moss"
doc.address = "Stanford University, Stanford, CA 94305"
doc.email = "mossr@cs.stanford.edu"
doc.date = T"\today"
doc.auto_sections = false # do not create new \sections for @tex'd functions

@tex begin
    𝕀(b) = b ? 1 : 0 # indicator function
    margin(x, y, 𝐰, φ) = (𝐰⋅φ(x))*y
end

@tex T"""\section{Zero-One Loss}
The \textit{zero-one loss} corresponds exactly to the notion of whether our
predictor made a mistake or not. We can also write the loss in terms of the margin.
Plotting the loss as a function of the margin, it is clear that the loss is $1$
when the margin is negative and $0$ when it is positive.
\[
\ZeroOneLoss(x, y, \w) =
\mathbb{1}[\underbrace{(\vec{w} \cdot \phi(x)) y}_{\rm margin} \le 0]
\]
""" ->
Loss_01(x, y, 𝐰, φ) = 𝕀(margin(x, y, 𝐰, φ) ≤ 0)

plot_01 = Plots.Linear(x->Loss_01(x, 1, [1], x->x), (-3,3), xbins=1000,
                       style="solid, ultra thick, mark=none, red",
                       legendentry=L"\ZeroOneLoss")
ax = Axis([plot_01],
          ymin=0, ymax=4,
          xlabel=L"{\rm margin}~(\mathbf{w}\cdot\phi(x))y",
          ylabel=L"\Loss(x,y,\mathbf{w})",
          style="ymajorgrids, enlarge x limits=0, ylabel near ticks",
          legendPos="north west",
          legendStyle="{at={(0.5,-0.5)},anchor=north}",
          width="5cm", height="4cm")
addplot!(ax; figure=true, figtype="marginfigure", figure_pos="-6cm",
         caption="\\textit{Zero-one loss}.", caption_pos=:above)

@tex T"""\section{Hinge Loss (SVMs)}
Hinge loss upper bounds $\ZeroOneLoss$ and has a non-trivial gradient.
The intuition is we try to increase the margin if it is less than $1$.
Minimizing upper bounds is a general idea; the hope is that pushing
down the upper bound leads to pushing down the actual function.
\[
\HingeLoss(x, y, \w) = \max\{1 - (\w \cdot \phi(x)) y, 0 \}
\]
""" ->
Loss_hinge(x, y, 𝐰, φ) = max(1 - margin(x, y, 𝐰, φ), 0)

plot_hinge = Plots.Linear(x->Loss_hinge(x, +1, [1], x->x), (-3,3),
                          style="solid, ultra thick, mark=none, darkgreen",
                          legendentry=L"\HingeLoss")
ax.plots = [plot_01, plot_hinge]
addplot!(ax; figure=true, figtype="marginfigure", figure_pos="-6cm",
         caption="\\textit{Hinge loss}.", caption_pos=:above)

@tex T"""\section{Logistic Loss}
Another popular loss function is the \textit{logistic loss}.
The intuition is we try to increase the margin even when it already exceeds $1$.
The main property of the logistic loss is no matter how correct your prediction is,
you will have non-zero loss. Thus, there is still an incentive (although diminishing)
to increase the margin. This means that you'll update on every single example.
\[
\LogisticLoss(x, y, \w) = \log(1 + e^{-(\w \cdot \phi(x)) y})
\]
""" ->
Loss_logistic(x, y, 𝐰, φ) = log(1 + exp(-margin(x, y, 𝐰, φ)))

plot_logistic = Plots.Linear(x->Loss_logistic(x, +1, [1], x->x), (-3,3),
                             style="solid, ultra thick, mark=none, sun",
                             legendentry=L"\LogisticLoss")
ax.plots = [plot_01, plot_hinge, plot_logistic]
addplot!(ax; figure=true, figtype="marginfigure", figure_pos="-6cm",
         caption="\\textit{Logistic loss}.", caption_pos=:above)

# note: content adapted from CS221 at Stanford
texgenerate()
```
For other examples, see the test files inside the `test/` directory.
Install the package from the Julia package manager REPL mode:

```julia
] add https://github.com/mossr/TeX.jl
```
These steps are only required if you set `doc.tufte=true`. This requires `lualatex` and `pythontex`. You can download the latest version of pythontex from https://github.com/gpoore/pythontex.

Compile the style:

```shell
cd style
sudo python setup.py install
cd ..
```

Compile the lexer:

```shell
cd lexer
sudo python setup.py install
cd ..
```