Filtered by JavaScript, Python

My site's now NextJS - And I (almost) regret it already

December 17, 2021
8 comments React, Django, Node, JavaScript

My personal blog was a regular Django website with jQuery (later switched to Cash) for the dynamic bits. In December 2021 I rewrote it in NextJS. It was a fun journey and NextJS is great, but it's not without some regrets.

Some flashpoints for note and comparison:

React SSR is awesome

The way infinitely nested comments are rendered is isomorphic now. Before, I had to code it once as a Jinja2 template thing and once as a Cash (a fork of jQuery) thing. That's the niceness and the promise of JavaScript, React, and server-side rendering.

JS bloat

The total JS payload is now ~111KB in 16 files. It used to be ~36KB in 7 files. :(

(Screenshots: the JS payload before and after the rewrite)

Data still comes from Django

Like any website, the web pages are built by A) getting the raw data from a database and B) rendering that data as HTML.
I didn't want to rewrite all the database queries in Node (inside getServerSideProps).

What I did was move all the data-gathering Django code under a /api/v1/ prefix, publishing simple JSON blobs. This is exposed on 127.0.0.1:3000, which the Node server fetches from. And I wired up that API endpoint so I can debug it via the web too. E.g. /api/v1/plog/sort-a-javascript-array-by-some-boolean-operation

Now, all I have to do is write some TypeScript interfaces that hopefully match the JSON that comes from Django. For example, here's the getServerSideProps code for getting the data for this page:


const url = `${API_BASE}/api/v1/plog/`;
const response = await fetch(url);
if (!response.ok) {
  throw new Error(`${response.status} on ${url}`);
}
const data: ServerData = await response.json();
const { groups } = data;

return {
  props: {
    groups,
  },
};
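
For illustration, the ServerData interface might look roughly like this (a sketch; the field names beyond groups are my guesses, and the real JSON from Django has more fields):

interface Post {
  oid: string;
  title: string;
}

interface Group {
  date: string;
  posts: Post[];
}

interface ServerData {
  groups: Group[];
}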

I like this pattern! Yes, there are overheads and Node could talk directly to PostgreSQL, but the upside is decoupling. And with good caching in front, the performance overhead rarely matters.

Server + CDN > static site generation

I considered full-blown static generation, but it's not an option. My little blog only has about 1,400 blog posts, but you can also filter by tags, combinations of tags, and paginated combinations of tags. E.g. /oc-JavaScript/oc-Python/p3. So the total number of pages is probably in the tens of thousands.

So, server-side rendering it is. To accomplish that I set up a very simple Express server. It proxies some stuff over to the Django server (e.g. /rss.xml) and then lets NextJS handle the rest.


import next from "next";
import express from "express";

const port = parseInt(process.env.PORT || "3000", 10);
const dev = process.env.NODE_ENV !== "production";
const app = next({ dev });
const handle = app.getRequestHandler();

app
  .prepare()
  .then(() => {
    const server = express();

    // (Proxying of e.g. /rss.xml over to Django omitted here.)
    // Let NextJS handle everything else.
    server.use((req, res) => handle(req, res));

    server.listen(port, () => {
      console.log(`> Ready on http://localhost:${port}`);
    });
  });

Now, my site is behind a CDN. And technically, it's behind Nginx too where I do some proxy_pass in-memory caching as a second line of defense.
Requests come in like this:

  1. from user to CDN
  2. from CDN to Nginx
  3. from Nginx to Express (proxy_pass)
  4. from Express to next().getRequestHandler()

I set Cache-Control with res.setHeader("Cache-Control", "public,max-age=86400") from within the getServerSideProps functions in the src/pages/**/*.tsx files. Once that's set, the response gets cached both in Nginx and in the CDN.
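
In other words, each page's getServerSideProps does something along these lines (a minimal sketch; only the Cache-Control value is from above, the rest is assumed):

import type { GetServerSidePropsContext } from "next";

export async function getServerSideProps({ res }: GetServerSidePropsContext) {
  // Cache the server-rendered response in Nginx and the CDN for 24 hours
  res.setHeader("Cache-Control", "public,max-age=86400");
  // ...fetch the page data from the Django API here...
  return { props: {} };
}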

Any caching is tricky when you need to do revalidation. Especially when you roll out a new central feature in the core bundle. But I quite like this pattern of a slow-rolling upgrade as individual pages eventually expire throughout the day.

There is a nasty bug with this and I don't yet know how to solve it. Client-side navigation depends on a build hash. So loading this page, when done with client-side navigation, becomes a request to /_next/data/2ps5rE-K6E39AoF4G6G-0/en/plog.json (no, I don't know how that hashed URL is determined). But if a new deployment happens, the new URL becomes /_next/data/UhK9ANa6t5p5oFg3LZ5dy/en/plog.json, so you end up with a 404 because you started on a page built from an old JavaScript bundle that is now invalid.

Thankfully, NextJS handles it quite gracefully by throwing an error on the 404 so it proceeds with a regular link redirect which takes you away from the old page.

Client-side navigation still sucks. Kinda.

Next has a built-in <Link> component that you use like this:


import Link from "next/link";

...

<Link href={"/plog/" + post.oid}>
  {post.title}
</Link>

Now, clicking any of those links will automatically use client-side routing. Thankfully, it takes care of preloading the necessary JavaScript (and CSS) simply by hovering over the link, so that when you eventually click, it just needs to do an XHR request to get the JSON necessary to render the page within the loaded app (and then do the pushState stuff to change the URL accordingly).

It sounds good in theory, but it kinda sucks because unless you have a really good Internet connection (or you happen to hit a CDN-cold URL), nothing happens when you click. This isn't NextJS's fault, but I wonder if it's actually horrible for users.

Yes, it sucks that a user clicks something and nothing happens. (I think it would be better if it was a button-press and not a link, because buttons feel more like an app whereas links have deeply ingrained UX expectations.) But most of the time it's honestly very fast, and when it works it's a nice experience. It's a great piece of functionality for more app'y sites, but less good for websites where most of the traffic comes from direct links or Google searches.

NextJS has built-in critical CSS optimization

Critical inline CSS is critical (pun intended) for web performance. Especially on my poor site where I depend on a bloated (and now ancient) CSS framework called Semantic-UI. Without inline CSS, the minified CSS file would become over 200KB.

In NextJS, to enable inline critical CSS loading you just need to add this to your next.config.js:


module.exports = {
  experimental: { optimizeCss: true },
};

and you have to add critters to your package.json. I've found some bugs with it but nothing I can't work around.
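
That is, critters needs to be installed so it ends up in your package.json, e.g.:

npm install critters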

Conclusion and what's next

I'm very familiar and experienced with React, but NextJS is new to me. I've managed to miss it all these years. Until now. So there's still a lot to learn. With other frameworks, I've always been comfortable with not actually understanding how Webpack and Babel work (internally), but at least I understood when and how I was calling/depending on them. Now, with NextJS, there's a lot of abstracted magic that I don't quite understand. It's hard to let go of that. It's hard to adopt powerful tools that are complex and created by large groups of people and understand it all too. If you're desperate to understand exactly how something works, you inevitably have to scale back the amount of stuff you're leveraging. (Note, it might be different if it's absolutely core to what you do for work and hack on for 8 hours a day.)

The JavaScript bundles in NextJS lazy-load quite decently, but it's definitely more bloat than it needs to be. It's partially up to me to fix, because much of the JS code on my site is for things that technically can wait, such as the interactive commenting form and the auto-complete search.

But here's the rub; my site is not an app. Most traffic comes from people doing a Google search, clicking on my page, and then buggering off. It's quite static that way, and who am I to assume that they'll stay, click around, and reuse all that loaded JavaScript code?

With that said; I'm going to start an experiment to rewrite the site again in Remix.

Sort a JavaScript array by some boolean operation

December 2, 2021
2 comments JavaScript

UPDATE Feb 4, 2022: Yes, as commenter Matthias pointed out, you can let JavaScript implicitly convert boolean minus boolean into a number: -1, 0, or 1.

Original but edited blog post below...


Imagine you have an array like this:


const items = [
  { num: 'one', labels: [] },
  { num: 'two', labels: ['foo'] },
  { num: 'three', labels: ['bar'] },
  { num: 'four', labels: ['foo'] },
  { num: 'five', labels: [] },
];

What you want is to sort them in a way that all the entries that have the label foo come first, but you don't want to "disturb" the existing order. Essentially, you want this to be the end result:

{ num: 'two', labels: ['foo'] },
{ num: 'four', labels: ['foo'] },

{ num: 'one', labels: [] },
{ num: 'three', labels: ['bar'] },
{ num: 'five', labels: [] },

Here's a way to do that:


items.sort(
  (itemA, itemB) =>
    itemB.labels.includes('foo') - itemA.labels.includes('foo')
);
console.log(items);

And the outcome is:

[
  { num: 'two', labels: [ 'foo' ] },
  { num: 'four', labels: [ 'foo' ] },
  { num: 'one', labels: [] },
  { num: 'three', labels: [ 'bar' ] },
  { num: 'five', labels: [] }
]

The simple trick is to turn the test operation into a number (0 or 1). You can do that explicitly with Number(), but as the code above shows, subtracting one boolean from another does the conversion implicitly. (Preserving the existing order works because Array.prototype.sort is stable in all modern engines.)
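
For example, the explicit version of the comparator would be:

items.sort(
  (itemA, itemB) =>
    Number(itemB.labels.includes('foo')) - Number(itemA.labels.includes('foo'))
);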

Subtracting one boolean from another coerces both to numbers, so the result is an integer. From the Node REPL...

> true - false
1
> false - true
-1
> false - false
0
> true - true
0

Brotli compression quality comparison in the real world

December 1, 2021
2 comments Node, JavaScript

At work, we use Brotli (via the Node built-in zlib) to compress large .json files into .json.br files. When using zlib.brotliCompress you can set options to override the quality number. Here's an example of it at quality 6:


import { promisify } from 'util'
import zlib from 'zlib'
const brotliCompress = promisify(zlib.brotliCompress)

const options = {
  params: {
    [zlib.constants.BROTLI_PARAM_MODE]: zlib.constants.BROTLI_MODE_TEXT,
    [zlib.constants.BROTLI_PARAM_QUALITY]: 6,
  },
}

export async function compress(data) {
  return brotliCompress(data, options)
}

But what if you mess with that number? Surely the files will become smaller, but at what cost? Well, I wrote a Node script that measured how long it takes to compress 6 large (~25MB each) .json files synchronously at each quality level. Then I put the numbers into a Google spreadsheet and voila:

(Charts: total compressed size per quality level, and total seconds per quality level)

Miles away from rocket science but I thought it was cute to visualize as a way of understanding the quality option.
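
For reference, the measuring loop was roughly along these lines (a sketch; the file names and the bookkeeping are my assumptions, not the actual script):

import fs from 'fs'
import zlib from 'zlib'

const files = ['one.json', 'two.json', 'three.json'] // assumption: your large .json files

for (let quality = 1; quality <= 11; quality++) {
  const t0 = Date.now()
  let totalBytes = 0
  for (const file of files) {
    // Compress synchronously so the timing only measures compression work
    const compressed = zlib.brotliCompressSync(fs.readFileSync(file), {
      params: {
        [zlib.constants.BROTLI_PARAM_MODE]: zlib.constants.BROTLI_MODE_TEXT,
        [zlib.constants.BROTLI_PARAM_QUALITY]: quality,
      },
    })
    totalBytes += compressed.length
  }
  const seconds = (Date.now() - t0) / 1000
  console.log(`quality=${quality}\tbytes=${totalBytes}\tseconds=${seconds}`)
}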

How to string pad a string in Python with a variable

October 19, 2021
1 comment Python

I just have to write this down because that's the rule; if I find myself googling something basic like this more than once, it's worth blogging about.

Suppose you have a string and you want to pad it with spaces. You have 2 options:


>>> s = "peter"
>>> s.ljust(10)
'peter     '
>>> f"{s:<10}"
'peter     '

The f-string notation is often more convenient because it can be combined with other formatting directives.
But, suppose the number 10 isn't hardcoded like that. Suppose it's a variable:


>>> s = "peter"
>>> width = 11
>>> s.ljust(width)
'peter      '
>>> f"{s:<width}"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: Invalid format specifier

Well, the way you need to do it with f-string formatting, when the width is a variable like that, is with this syntax:


>>> f"{s:<{width}}"
'peter      '
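
For what it's worth, the same nested-brace syntax works with the other alignment directives too:

>>> f"{s:>{width}}"
'      peter'
>>> f"{s:^{width}}"
'   peter   '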

How to bulk-insert Firestore documents in a Firebase Cloud function

September 23, 2021
1 comment Node, Firebase, JavaScript

You can't batch-add/bulk-insert documents in the Firebase Web SDK. But you can with the Firebase Admin Node SDK. Like, in a Firebase Cloud Function. Here's an example of how to do that:


// Assumes the Admin SDK has been initialized elsewhere, e.g.:
// import * as admin from "firebase-admin";
// admin.initializeApp();

const firestore = admin.firestore();
let batch = firestore.batch();
let counter = 0;
let totalCounter = 0;
const promises = [];
for (const thing of MANY_MANY_THINGS) {
  const docRef = firestore.collection("MY_COLLECTION").doc();
  batch.set(docRef, {
    foo: thing.foo,
    bar: thing.bar,
    favNumber: 0,
  });
  counter++;
  // A Firestore batch can hold at most 500 writes, so commit and start a new one.
  if (counter >= 500) {
    console.log(`Committing batch of ${counter}`);
    promises.push(batch.commit());
    totalCounter += counter;
    counter = 0;
    batch = firestore.batch();
  }
}
if (counter) {
  console.log(`Committing batch of ${counter}`);
  promises.push(batch.commit());
  totalCounter += counter;
}
await Promise.all(promises);
console.log(`Committed total of ${totalCounter}`);

I'm using this in a Cloud HTTP function where I can submit a large amount of data and have each one fill up a collection.
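
For context, here's a hedged sketch of what such an HTTP function could look like end to end (the function name, the request shape, and the collection name are all assumptions, not my actual code):

import * as admin from "firebase-admin";
import * as functions from "firebase-functions";

admin.initializeApp();

export const bulkInsert = functions.https.onRequest(async (req, res) => {
  const things = req.body.things || []; // assumed request shape
  const firestore = admin.firestore();
  let batch = firestore.batch();
  const promises = [];
  let counter = 0;
  for (const thing of things) {
    batch.set(firestore.collection("MY_COLLECTION").doc(), thing);
    counter++;
    if (counter >= 500) {
      // Commit each full batch of 500 writes and start a new one
      promises.push(batch.commit());
      batch = firestore.batch();
      counter = 0;
    }
  }
  if (counter) promises.push(batch.commit());
  await Promise.all(promises);
  res.json({ inserted: things.length });
});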

TypeScript generic async function wrapper function

September 12, 2021
0 comments JavaScript

I find this so fiddly! I love TypeScript and will continue to use it if there's a choice. But I just wanted to write a simple async function wrapper and I had to Google for it and nothing was quite right. Here's my simple solution, as an example:


function wrappedAsyncFunction<T>(
  fn: (...args: any[]) => Promise<T>
): (...args: any[]) => Promise<T> {
  return async function (...args: any[]) {
    console.time("Took");
    try {
      return await fn(...args);
    } catch (error) {
      console.warn("FYI, an error happened:", error);
      throw error;
    } finally {
      console.timeEnd("Took");
    }
  };
}

Here's a Playground demo
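
For example, usage could look like this (fetchThing is a made-up function, just for illustration):

async function fetchThing(id: string): Promise<string> {
  return `thing ${id}`;
}

// T is inferred as string, so the wrapped function keeps its return type
const timedFetchThing = wrappedAsyncFunction(fetchThing);
timedFetchThing("42").then((result) => console.log(result));
// logs the "Took" timing first, then: thing 42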

What I use it for is to wrap my Firebase Cloud Functions so that if any error happens, I can send that error to Rollbar. In particular, here's an example of it in use:


diff --git a/functions/src/cleanup-thumbnails.ts b/functions/src/cleanup-thumbnails.ts
index 46bdb34..a3e8d54 100644
--- a/functions/src/cleanup-thumbnails.ts
+++ b/functions/src/cleanup-thumbnails.ts
@@ -2,6 +2,8 @@ import * as admin from "firebase-admin";
 import * as functions from "firebase-functions";
 import { logger } from "firebase-functions";

+import { wrappedLogError } from "./rollbar-logger";
+
 const OLD_DAYS = 30 * 6; // 6 months
 // const ADDITIONAL_DAYS_BACK = 5;
 // const ADDITIONAL_DAYS_BACK = 15;
@@ -9,7 +11,7 @@ const PREFIX = "thumbnails";

 export const scheduledCleanupThumbnails = functions.pubsub
   .schedule("every 24 hours")
-  .onRun(async () => {
+  .onRun(wrappedLogError(async () => {
     logger.debug("Running scheduledCleanupThumbnails");

...

And my wrappedLogError looks like this:


export function wrappedLogError<T>(
  fn: (...args: any[]) => Promise<T>
): (...args: any[]) => Promise<T> {
  return async function(...args: any[]) {
    try {
      return await fn(...args);
    } catch (error) {
      logError(error);
      throw error;
    }
  };
}
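
The logError it imports is presumably just a thin wrapper around the Rollbar SDK, roughly something like this (a sketch; all configuration details are assumptions):

import Rollbar from "rollbar";

const rollbar = new Rollbar({
  accessToken: process.env.ROLLBAR_ACCESS_TOKEN,
  captureUncaught: true,
  captureUnhandledRejections: true,
});

export function logError(error: unknown) {
  // Report the error to Rollbar
  rollbar.error(error as Error);
}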

I'm not sure it's the best or most correct way to do it. Perhaps there's a more elegant solution, but for now I'll ship this because it seems to work fine.

TypeScript function keyword arguments like Python

September 8, 2021
0 comments Python, JavaScript

To do this in Python:


def print_person(name="peter", dob=1979):
    print(f"name={name}\tdob={dob}")


print_person() 
# prints: name=peter dob=1979

print_person(name="Tucker")
# prints: name=Tucker    dob=1979

print_person(dob=2013)
# prints: name=peter dob=2013

print_person(sex="boy")
# TypeError: print_person() got an unexpected keyword argument 'sex'

...in TypeScript:


function printPerson({
  name = "peter",
  dob = 1979
}: { name?: string; dob?: number } = {}) {
  console.log(`name=${name}\tdob=${dob}`);
}

printPerson();
// prints: name=peter    dob=1979

printPerson({});
// prints: name=peter    dob=1979

printPerson({ name: "Tucker" });
// prints: name=Tucker   dob=1979

printPerson({ dob: 2013 });
// prints: name=peter    dob=2013


printPerson({ gender: "boy" })
// Error: Object literal may only specify known properties, and 'gender' 

Here's a Playground copy of it.

It's not a perfect "transpose" across the two languages, but it's sufficiently similar.
The trick is the trailing = {} at the end of the function signature in TypeScript, which makes it possible to call the function without any object at all, plus the ? on each key, which makes individual keys optional.

By the way, the pure JavaScript version of this is:


function printPerson({ name = "peter", dob = 1979 } = {}) {
  console.log(`name=${name}\tdob=${dob}`);
}

But, unlike Python and TypeScript, you get no warnings or errors if you call printPerson({ gender: "boy" }); with the JavaScript version.

How I upload Firebase images optimized

September 2, 2021
0 comments JavaScript, Web development, Firebase

I have an app that allows you to upload images. The images are stored using Firebase Storage. Then, once uploaded, I have a Firebase Cloud Function that can turn that into a thumbnail. The problem with this is that it takes a long time to wake up the cloud function (the first time) and to generate that thumbnail. Not to mention downloading the thumbnail payload to the client. It's not unrealistic that the whole thumbnail generation plus download can take multiple (single-digit) seconds. But you don't want to have the user sit and wait that long. My solution is to display the uploaded file in an <img> tag using URL.createObjectURL().

The following code is mostly pseudo-code but should look familiar if you're used to how Firebase and React/Preact work. Here's the FileUpload component:


interface Props {
  onUploaded: ({ file, filePath }: { file: File; filePath: string }) => void;
  onSaved?: () => void;
}

function FileUpload({ onSaved, onUploaded }: Props) {
  const [file, setFile] = useState<File | null>(null);

  // ...some other state stuff omitted for the example.

  useEffect(() => {
    if (file) {
      const metadata = {
        contentType: file.type,
      };

      const filePath = getImageFullPath(prefix, item ? item.id : list.id, file);
      const storageRef = storage.ref();

      const uploadTask = storageRef.child(filePath).put(file, metadata);
      uploadTask.on(
        "state_changed",
        (snapshot) => {
          // ...set progress percentage
        },
        (error) => {
          setUploadError(error);
        },
        () => {
          onUploaded({ file, filePath }); // THE IMPORTANT BIT!

          db.collection("pictures")
            .add({ filePath })
            .then(() => {
              onSaved && onSaved();
            });
        }
      );
    }
  }, [file]);

  return (
    <input
      type="file"
      accept="image/jpeg, image/png"
      onInput={(event) => {
        if (event.target.files) {
          const file = event.target.files[0];
          validateFile(file);
          setFile(file);
        }
      }}
    />
  );
}

The important "trick" is that we call back after the storage is complete by sending the filePath and the file back to whatever component triggered this component. Now, you can know, in the parent component, that there's going to soon be an image reference with a file path (filePath) that refers to that File object.

Here's a rough version of how I use this <FileUpload> component:


function Images() {
  const [uploadedFiles, setUploadedFiles] = useState<Map<string, File>>(
    new Map()
  );

  return (
    <div>
      <FileUpload
        onUploaded={({ file, filePath }: { file: File; filePath: string }) => {
          const newMap: Map<string, File> = new Map(uploadedFiles);
          newMap.set(filePath, file);
          setUploadedFiles(newMap);
        }}
      />

      <ListUploadedPictures uploadedFiles={uploadedFiles} />
    </div>
  );
}

function ListUploadedPictures({
  uploadedFiles,
}: {
  uploadedFiles: Map<string, File>;
}) {
  // Imagine some Firebase Firestore subscriber here
  // that watches for uploaded pictures.
  return (
    <div>
      {pictures.map((picture) => (
        <Picture
          key={picture.filePath}
          picture={picture}
          uploadedFiles={uploadedFiles}
        />
      ))}
    </div>
  );
}

function Picture({
  uploadedFiles,
  picture,
}: {
  uploadedFiles: Map<string, File>;
  picture: {
    filePath: string;
  };
}) {
  // If this picture was just uploaded in this session, we have its File object
  const file = uploadedFiles.get(picture.filePath);
  const thumbnailURL = getThumbnailURL(picture.filePath, 500);
  const [loaded, setLoaded] = useState(false);

  useEffect(() => {
    let mounted = true;
    const preloadImg = new Image();
    preloadImg.src = thumbnailURL;

    const callback = () => {
      if (mounted) {
        setLoaded(true);
      }
    };
    if (preloadImg.decode) {
      preloadImg.decode().then(callback, callback);
    } else {
      preloadImg.onload = callback;
    }

    return () => {
      mounted = false;
    };
  }, [thumbnailURL]);

  return (
    <img
      style={{
        width: 500,
        height: 500,
        objectFit: "cover",
      }}
      src={
        loaded
          ? thumbnailURL
          : file
          ? URL.createObjectURL(file)
          : PLACEHOLDER_IMAGE
      }
    />
  );
}

Phew! That was a lot of code. Sorry about that. But still, this is just a summary of the real application code.

The point is that I send the File object back to the parent component immediately after having uploaded it to Firebase Cloud Storage. Then, having access to it as a File object, I can use it as the thumbnail while I wait for the real thumbnail to come in. Now it doesn't matter that it takes 1-2 seconds to wake up the cloud function, 1-2 seconds to perform the thumbnail creation, and then 0.1-2 seconds to download the thumbnail. All the while this is happening, you're looking at the File object that was uploaded. Visually, the user doesn't even notice the difference. If you refresh the page, that temporary in-memory uploadedFiles (Map instance) is empty, so you're now relying on the loading of the thumbnail, which should hopefully, at this point, be stored in the browser's native HTTP cache.

The other important part of the trick is that we're using const preloadImg = new Image() for loading the thumbnail. And by relying on preloadImg.decode ? preloadImg.decode().then(...) : preloadImg.onload = ..., we're informed only once the thumbnail has been successfully created and successfully downloaded, and only then make the swap.

How to fadeIn and fadeOut like jQuery but with Cash

August 24, 2021
0 comments JavaScript

Remember jQuery? Yeah, it was great. But it was also horrible in its own ways, at least compared to the more powerful tools we have now, as of 2021. I still (almost) use it here on my site. Actually, I use a "fork" of jQuery called Cash which calls itself: "An absurdly small jQuery alternative for modern browsers."
Cash is written in TypeScript, which gives me peace of mind, and as a JS bundle, it's only 19KB minified (5.3KB Brotli compressed) whereas jQuery is 87KB minified (27KB Brotli compressed).

But something that jQuery has, that Cash doesn't, is animations. E.g. $('myselector').fadeIn(). If you need to do this with Cash you can use the following pure JavaScript solution:


// Example implementation

const msg = $('<div class="message">')
  .text(`Random message: ${Math.random()}`)
  .css("opacity", 0)
  .css("transition", "opacity 600ms")
  .prependTo($("#root"));
setTimeout(() => msg.css("opacity", 1), 0);

setTimeout(() => {
  msg.css("transition", "opacity 1000ms").css("opacity", 0);
  setTimeout(() => msg.remove(), 1000);
}, 3000);

What this application demonstrates is the creation of a <div> that's immediately injected into the DOM but slowly fades into view. And 3 seconds later it fades out and is removed. Full demo/sample application here.

(Screenshot: sample application using Cash like jQuery's $.fadeIn().)

The point of the demo is that you can trigger the fade-in effect with just Cash while still relying on CSS for the actual animation.
The trick is to, ultimately, create the element first like this:


<div class="message" style="opacity:0; transition: opacity 600ms">
  Random message: 0.6517198324628395
</div>

and then, right after it's been added to the DOM, change the style=... to:


-<div class="message" style="opacity:0; transition: opacity 600ms">
+<div class="message" style="opacity:1; transition: opacity 600ms">

What's neat about this is that you use the transition shortcut, so it's done entirely with CSS instead of a requestAnimationFrame and/or while-loop like jQuery's effects.js does.
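
If you find yourself doing this a lot, you could package it up as little helpers on top of Cash (a sketch; the function names are mine, not part of Cash):

function fadeIn($el, duration = 600) {
  $el.css("opacity", 0).css("transition", `opacity ${duration}ms`);
  // Give the browser a chance to paint the opacity:0 state before flipping it
  setTimeout(() => $el.css("opacity", 1), 0);
}

function fadeOut($el, duration = 1000, removeAfter = false) {
  $el.css("transition", `opacity ${duration}ms`).css("opacity", 0);
  if (removeAfter) {
    // Remove the element once the CSS transition has had time to finish
    setTimeout(() => $el.remove(), duration);
  }
}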

Note! This is not a polyfill since jQuery's fadeIn() (etc.) can do a lot more such as callbacks. The example might not be great but I hope this little solution becomes useful for someone else who needs this.