A utility to produce a JSON representation of the key front matter and contents of Hugo documents. Its main intent is to produce JSON that can be consumed by Lunr (and Lunr-like packages) to support search on a static Hugo site. It's designed to be a fast and modern alternative to the now unsupported hugo_lunr Node tool.
Pull requests are welcome. A list of goals and work to be done is available in `ToDo.txt`.

It currently supports `.md` files and both YAML and TOML front matter.
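For reference, a typical Hugo document with YAML front matter (delimited by `---`) looks like the sketch below; TOML front matter uses `+++` delimiters instead. The title, date, and tags shown are illustrative only.

```markdown
---
title: "My First Post"
date: 2019-01-01
tags: ["hugo", "search"]
---

The body of the post, written in Markdown.
```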
```bash
hugo_to_json HUGO_CONTENT_DIRECTORY OUTPUT_LOCATION
```

Example usage is shown below.

```bash
hugo_to_json example/blog/content example/blog/static/index.json
```

It defaults to `./content` for the content directory and `./static/index.json` for the index output.
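Lunr-style search indexes are typically built from an array of records, one per document, combining front matter fields with the document body. The record below is purely illustrative of that shape; the exact fields this tool emits are not specified here and may differ.

```json
[
  {
    "uri": "/blog/my-first-post/",
    "title": "My First Post",
    "tags": ["hugo", "search"],
    "content": "The plain-text body of the post..."
  }
]
```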
If you want to use the latest version of this tool as part of a CI build process, the following script should work.
```bash
set -e

# Find the tag name of the latest release
LATEST_RELEASE=$(curl -L -s -H 'Accept: application/json' https://github.com/arranf/HugoLunr/releases/latest)
LATEST_VERSION=$(echo "$LATEST_RELEASE" | sed -e 's/.*"tag_name":"\([^"]*\)".*/\1/')

ARTIFACT_URL="https://github.com/arranf/HugoToJSON/releases/download/$LATEST_VERSION/hugo_to_json"
INSTALL_DIRECTORY="."
INSTALL_NAME="hugo_to_json"
DOWNLOAD_FILE="$INSTALL_DIRECTORY/$INSTALL_NAME"

echo "Fetching $ARTIFACT_URL.."

# Download the release artifact with curl or wget, whichever is available
if test -x "$(command -v curl)"; then
    code=$(curl -s -w '%{http_code}' -L "$ARTIFACT_URL" -o "$DOWNLOAD_FILE")
elif test -x "$(command -v wget)"; then
    code=$(wget -q -O "$DOWNLOAD_FILE" --server-response "$ARTIFACT_URL" 2>&1 | awk '/^  HTTP/{print $2}' | tail -1)
else
    echo "Neither curl nor wget was available to perform http requests."
    exit 1
fi

if [ "$code" != 200 ]; then
    echo "Request failed with code $code"
    exit 1
fi

chmod +x "$DOWNLOAD_FILE"
```
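Once the script completes, the binary has been downloaded into the install directory (`.` in the script above) and made executable, so it can be invoked directly, for example:

```bash
./hugo_to_json example/blog/content example/blog/static/index.json
```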