Filtered by macOS


Build pyenv Python versions on macOS Catalina 10.15

February 19, 2020
9 comments Python, macOS

UPDATE Mar 7, 2022: For OSX 12.2 Monterey

Here's what I needed to do in 2022 to get this to work:

SDKROOT=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.1.sdk \
  MACOSX_DEPLOYMENT_TARGET=12.2 \
  PYTHON_CONFIGURE_OPTS="--enable-framework" \
  pyenv install 3.10.2

BELOW IS ORIGINAL BLOG POST

I'm still working on getting pyenv into my bloodstream. It seems like totally the right tool for having different versions of Python available on macOS that don't suddenly break when you run brew upgrade periodically. But everything I tried failed with an error similar to this:

python-build: use openssl from homebrew
python-build: use readline from homebrew
Installing Python-3.7.0...
python-build: use readline from homebrew

BUILD FAILED (OS X 10.15.x using python-build 20XXXXXX)

Inspect or clean up the working tree at /var/folders/mw/0ddksqyn4x18lbwftnc5dg0w0000gn/T/python-build.20190528163135.60751
Results logged to /var/folders/mw/0ddksqyn4x18lbwftnc5dg0w0000gn/T/python-build.20190528163135.60751.log

Last 10 log lines:
./Modules/posixmodule.c:5924:9: warning: this function declaration is not a prototype [-Wstrict-prototypes]
    if (openpty(&master_fd, &slave_fd, NULL, NULL, NULL) != 0)
        ^
./Modules/posixmodule.c:6018:11: error: implicit declaration of function 'forkpty' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
    pid = forkpty(&master_fd, NULL, NULL, NULL);
          ^
./Modules/posixmodule.c:6018:11: warning: this function declaration is not a prototype [-Wstrict-prototypes]
2 warnings and 2 errors generated.
make: *** [Modules/posixmodule.o] Error 1
make: *** Waiting for unfinished jobs....

I read through the Troubleshooting FAQ and the "Common build problems" documentation. Xcode was up to date and I had upgraded all the related brew packages. Nothing seemed to work.

Until I saw this comment on an open pyenv issue: "Unable to install any Python version on MacOS"

All I had to do was replace the 10.14 with 10.15 and it finally worked here on Catalina 10.15. So, the magical line was this:

SDKROOT=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk \
MACOSX_DEPLOYMENT_TARGET=10.15 \
PYTHON_CONFIGURE_OPTS="--enable-framework" \
pyenv install -v 3.7.6
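
If you'd rather not hardcode the SDK path and the macOS version, you can ask the system for them. This is a sketch, not what I originally ran, and whether xcrun points at the full Xcode SDK depends on your xcode-select setup:

# Let Xcode report the SDK path and macOS report its own version
SDKROOT="$(xcrun --sdk macosx --show-sdk-path)" \
MACOSX_DEPLOYMENT_TARGET="$(sw_vers -productVersion | cut -d. -f1,2)" \
PYTHON_CONFIGURE_OPTS="--enable-framework" \
pyenv install -v 3.7.6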

Hopefully, by blogging about it, you'll find this when Googling and I'll remember it the next time I need it, because it did eat 2 hours of precious evening coding time.

"ld: library not found for -lssl" trying to install mysqlclient in Python on macOS

February 5, 2020
1 comment Python, macOS

I don't know how many times I've encountered this, but by blogging about it, hopefully it'll help me (and you!) find this sooner next time.

If you get this:

clang -bundle -undefined dynamic_lookup -L/usr/local/opt/readline/lib -L/usr/local/opt/readline/lib -L/Users/peterbe/.pyenv/versions/3.8.0/lib -L/opt/boxen/homebrew/lib -L/usr/local/opt/readline/lib -L/usr/local/opt/readline/lib -L/Users/peterbe/.pyenv/versions/3.8.0/lib -L/opt/boxen/homebrew/lib -L/opt/boxen/homebrew/lib -I/opt/boxen/homebrew/include build/temp.macosx-10.14-x86_64-3.8/MySQLdb/_mysql.o -L/usr/local/Cellar/mysql/8.0.18_1/lib -lmysqlclient -lssl -lcrypto -o build/lib.macosx-10.14-x86_64-3.8/MySQLdb/_mysql.cpython-38-darwin.so
    ld: library not found for -lssl
    clang: error: linker command failed with exit code 1 (use -v to see invocation)
    error: command 'clang' failed with exit status 1

(The most important line is the ld: library not found for -lssl)

On most macOS systems, you'll get this when you try to install a Python package that needs a binary compile step against the system openssl (which I think comes from the OS).

The solution is simple, run this first:


export LDFLAGS="-L/usr/local/opt/openssl/lib"
export CPPFLAGS="-I/usr/local/opt/openssl/include"

Depending on your install of things, you might need to adjust this accordingly. For me, I have:

ls -l /usr/local/opt/openssl/
total 1272
-rw-r--r--   1 peterbe  staff     717 Sep 10 09:13 AUTHORS
-rw-r--r--   1 peterbe  staff  582924 Dec 19 11:32 CHANGES
-rw-r--r--   1 peterbe  staff     743 Dec 19 11:32 INSTALL_RECEIPT.json
-rw-r--r--   1 peterbe  staff    6121 Sep 10 09:13 LICENSE
-rw-r--r--   1 peterbe  staff   42183 Sep 10 09:13 NEWS
-rw-r--r--   1 peterbe  staff    3158 Sep 10 09:13 README
drwxr-xr-x   4 peterbe  staff     128 Dec 19 11:32 bin
drwxr-xr-x   3 peterbe  staff      96 Sep 10 09:13 include
drwxr-xr-x  10 peterbe  staff     320 Sep 10 09:13 lib
drwxr-xr-x   4 peterbe  staff     128 Sep 10 09:13 share
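
If your Homebrew prefix is somewhere else (for example /opt/homebrew on Apple Silicon), asking brew for the path avoids hardcoding it. A sketch:

# Derive the flags from wherever Homebrew put openssl
export LDFLAGS="-L$(brew --prefix openssl)/lib"
export CPPFLAGS="-I$(brew --prefix openssl)/include"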

Now, with those things set you should hopefully be able to do things like:

pip install mysqlclient

Experimenting with Nginx worker_processes

February 14, 2019
0 comments Web development, Nginx, macOS, Linux

I have Nginx 1.15.8 installed with Homebrew on my macOS. By default, the /usr/local/etc/nginx/nginx.conf is set to...:

worker_processes  1;

But, from the documentation, it says:

"The optimal value depends on many factors including (but not limited to) the number of CPU cores, the number of hard disk drives that store data, and load pattern. When one is in doubt, setting it to the number of available CPU cores would be a good start (the value “auto” will try to autodetect it)." (bold emphasis mine)

What is the ideal number for me? The performance of Nginx on my laptop doesn't really matter. But for my side-projects it's important to have a fast Nginx since it serves static HTML and lots of static assets. However, on my personal servers I have a bunch of other resource-hungry stuff going on that I know is more likely to need the resources, like Elasticsearch and uwsgi.

To figure this out, I set up a benchmark that requested a small index.html about 10,000 times across 10 concurrent clients, using hey.

hey -n 10000 -c 10 http://peterbecom.local/plog/variable_cache_control/awspa
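
To repeat the runs without babysitting the terminal, a rough sketch like this works (the awk pattern assumes hey's default "Requests/sec:" summary line):

# Run hey 10 times and keep the best requests/second figure
best=0
for i in {1..10}; do
  rps=$(hey -n 10000 -c 10 http://peterbecom.local/plog/variable_cache_control/awspa \
        | awk '/Requests\/sec:/ {print $2}')
  echo "Run $i: $rps reqs/s"
  # Keep the maximum so far (awk does the floating point comparison)
  best=$(awk -v a="$best" -v b="$rps" 'BEGIN { if (b + 0 > a + 0) print b; else print a }')
done
echo "BEST: $best reqs/s"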

I ran this 10 times for each worker_processes setting in the nginx.conf file. Here are the best results:

1 WORKER PROCESSES
BEST  : 13,607.24 reqs/s

2 WORKER PROCESSES
BEST  : 17,422.76 reqs/s

3 WORKER PROCESSES
BEST  : 18,886.60 reqs/s

4 WORKER PROCESSES
BEST  : 19,417.35 reqs/s

5 WORKER PROCESSES
BEST  : 19,094.18 reqs/s

6 WORKER PROCESSES
BEST  : 19,855.32 reqs/s

7 WORKER PROCESSES
BEST  : 19,824.86 reqs/s

8 WORKER PROCESSES
BEST  : 20,118.25 reqs/s

Or, as a graph:

Graph

Now note, this is done here on my MacBook Pro. Not on my Ubuntu DigitalOcean servers. For now, I just want to get a feeling for how these numbers correlate.

Conclusion

The benchmark isn't good enough. The numbers are pretty stable but I'm doing this on my laptop with multiple browsers idling, Slack, and Spotify running. Clearly, the throughput goes up a bit when you allocate more workers, but if anything can be learned from this, it's to go beyond 1 as a quick fix and from there start poking around with more exhaustive benchmarks. And don't forget, if you have time to go deeper on this, to look at the combination of worker_connections and worker_processes.
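
For reference, a sketch of that combination in nginx.conf (a starting point, not a recommendation for your particular hardware):

worker_processes  auto;          # let Nginx detect the number of CPU cores

events {
    worker_connections  1024;    # per worker; total capacity is roughly workers x connections
}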

How to encrypt a file with Emacs on macOS (ccrypt)

January 29, 2019
0 comments macOS, Linux

Suppose you have a cleartext file that you want to encrypt with a password, here's how you do that with ccrypt on macOS. First:


▶ brew install ccrypt

Now, you have the ccrypt program. Let's test it:

cat secrets.txt
Garage pin: 123456
Favorite kid: bart
Wedding ring order no: 98c4de910X

▶ ccrypt secrets.txt
Enter encryption key: ▉▉▉▉▉▉▉▉▉▉▉
Enter encryption key: (repeat) ▉▉▉▉▉▉▉▉▉▉▉

# Note that the original 'secrets.txt' is replaced
# with the '.cpt' version.
▶ ls | grep secrets
secrets.txt.cpt

▶ less secrets.txt.cpt
"secrets.txt.cpt" may be a binary file.  See it anyway?

There. Now you can back up that file on Dropbox or whatever and not have to worry about anybody being able to open it without your password. To read it again:


▶ ccrypt --decrypt --cat secrets.txt.cpt
Enter decryption key: ▉▉▉▉▉▉▉▉▉▉▉
Garage pin: 123456
Favorite kid: bart
Wedding ring order no: 98c4de910X

▶ ls | grep secrets
secrets.txt.cpt

Or, to edit it you can do these steps:


▶ ccrypt --decrypt secrets.txt.cpt
Enter decryption key: ▉▉▉▉▉▉▉▉▉▉▉


▶ vi secrets.txt

▶ ccrypt secrets.txt
Enter encryption key:
Enter encryption key: (repeat)
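
If you do that dance a lot, the three steps can be wrapped in a tiny shell function. A sketch (the ccedit name and the $EDITOR fallback are mine, not part of ccrypt):

# Decrypt, edit, re-encrypt in one go
function ccedit() {
  local encrypted="$1"              # e.g. secrets.txt.cpt
  local plain="${encrypted%.cpt}"   # the cleartext name, e.g. secrets.txt
  ccrypt --decrypt "$encrypted" || return 1
  "${EDITOR:-vi}" "$plain"
  ccrypt --encrypt "$plain"
}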

Either way, it's clunky that you have to extract the file and remember to encrypt it again. That's where Emacs comes in. Assuming you already have Emacs installed and a ~/.emacs file, add these lines to your ~/.emacs:


;; Open *.cpt files in sensitive-mode (this assumes a `sensitive-mode' minor
;; mode is defined elsewhere in your config) and disable auto-save for them,
;; so Emacs doesn't leave cleartext auto-save files lying around.
(setq auto-mode-alist
      (append '(("\\.cpt$" . sensitive-mode))
              auto-mode-alist))
(add-hook 'sensitive-mode-hook (lambda () (auto-save-mode -1)))
;; Load the ps-ccrypt package that Homebrew installed
(setq load-path (cons "/usr/local/share/emacs/site-lisp/ccrypt" load-path))
(require 'ps-ccrypt "ps-ccrypt.el")

By the way, how did I know that the load path should be /usr/local/share/emacs/site-lisp/ccrypt? I looked at the output from brew:


▶ brew info ccrypt
ccrypt: stable 1.11 (bottled)
Encrypt and decrypt files and streams
...
==> Caveats
Emacs Lisp files have been installed to:
  /usr/local/share/emacs/site-lisp/ccrypt
...

Anyway, now I can use emacs to open the secrets.txt.cpt file and it will automatically handle the password stuff:

About to open

Opening with password

Opened

This is really convenient. Now you can open an encrypted file, type in your password, and it will take care of encrypting it for you when you're done (saving the file).

Be warned! I'm not an expert at either Emacs or encryption, so be careful, and if you get nervous, take precautions and set aside more time to study this more deeply.

elapsed function in bash to print how long things take

December 12, 2018
0 comments macOS, Linux

I needed this for a project and it has served me pretty well. Let's jump right into it:


# This is elapsed.sh

SECONDS=0

function elapsed()
{
  local T=$SECONDS
  local D=$((T/60/60/24))
  local H=$((T/60/60%24))
  local M=$((T/60%60))
  local S=$((T%60))
  (( $D > 0 )) && printf '%d days ' $D
  (( $H > 0 )) && printf '%d hours ' $H
  (( $M > 0 )) && printf '%d minutes ' $M
  (( $D > 0 || $H > 0 || $M > 0 )) && printf 'and '
  printf '%d seconds\n' $S
}

And here's how you use it:


# Assume elapsed.sh to be in the current working directory
source elapsed.sh

echo "Doing some stuff..."
# Imagine it does something slow that
# takes about 3 seconds to complete.
sleep 3
elapsed

echo "Some quick stuff..."
sleep 1
elapsed

echo "Doing some slow stuff..."
sleep 61
elapsed

The output of running that is:

Doing some stuff...
3 seconds
Some quick stuff...
4 seconds
Doing some slow stuff...
1 minutes and 5 seconds

Basically, if you have a bash script that does a bunch of slow things, having a line with elapsed after some blocks of code will print out how long the script has been running up to that point.
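
Note that the timings are cumulative since the script started. If you want per-block timings instead, you can reset the counter; a small sketch (the SECONDS=0 reset is my tweak, not part of the original function):

source elapsed.sh

echo "Doing some slow stuff..."
sleep 61
elapsed        # prints "1 minutes and 1 seconds"

SECONDS=0      # reset the counter before the next block
echo "Doing some quick stuff..."
sleep 3
elapsed        # prints "3 seconds" instead of "1 minutes and 4 seconds"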

It's not beautiful but it works.

How I performance test PostgreSQL locally on macOS

December 10, 2018
2 comments Web development, macOS, PostgreSQL

It's weird to do performance analysis of a database you run on your laptop. When testing some app, your local instance probably has 1/1000 the amount of realistic data compared to a production server. Or, you're running a bunch of end-to-end integration tests whose PostgreSQL performance doesn't make sense to measure.

Anyway, if you are doing some performance testing of an app that uses PostgreSQL one great tool to use is pghero. I use it for my side-projects and it gives me such nice insights into slow queries that I'm willing to live with the cost that it is to run it on a production database.

This is more of a brain dump of how I run it locally:

First, you need to edit your postgresql.conf. Even if you used Homebrew to install it, it's not obvious where the right config file is. Start psql (on any database) and type this to find out which config file is in use:


$ psql kintobench

kintobench=# show config_file;
               config_file
-----------------------------------------
 /usr/local/var/postgres/postgresql.conf
(1 row)

Now, open /usr/local/var/postgres/postgresql.conf and add the following lines:

# Peterbe: From Pghero's configuration help.
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all

Now, to restart the server use:


▶ brew services restart postgresql
Stopping `postgresql`... (might take a while)
==> Successfully stopped `postgresql` (label: homebrew.mxcl.postgresql)
==> Successfully started `postgresql` (label: homebrew.mxcl.postgresql)
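
I believe pghero also needs the pg_stat_statements extension created in the database you point it at. A sketch of setting that up and sanity-checking it (kintobench is just my local database name):

▶ psql kintobench -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"
▶ psql kintobench -c "SELECT count(*) FROM pg_stat_statements;"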

The next thing you need is pghero itself, and it's easy to run in Docker. So, to start, you need Docker for Mac installed. You also need to know the database URL. Here's how I ran it:

docker run -ti -e DATABASE_URL=postgres://peterbe:@host.docker.internal:5432/kintobench -p 8080:8080 ankane/pghero

Duplicate indexes

Note the trick of peterbe:@host.docker.internal: I don't use a password, and inside the Docker container it doesn't know my terminal username, so the username has to be spelled out. And host.docker.internal is so the Docker container can reach the PostgreSQL installed on the host.

Once that starts up you can go to http://localhost:8080 in a browser and see a listing of all the cumulatively slowest queries. There are other cool features in pghero too that you can immediately benefit from, such as hints about unused/redundant database indices.

Hope it helps!

The best grep tool in the world; ripgrep

June 19, 2018
3 comments Linux, Web development, macOS

tl;dr: ripgrep (aka rg) is the best grep tool today.

ripgrep is a tool for searching files. Its killer feature is that it's fast. Like, really really fast. Faster than sift, git grep, ack, regular grep etc.

If you don't believe me, either read this detailed blog post from its author or just jump straight to the conclusion:

  • For both searching single files and huge directories of files, no other tool obviously stands above ripgrep in either performance or correctness.

  • ripgrep is the only tool with proper Unicode support that doesn’t make you pay dearly for it.

  • Tools that search many files at once are generally slower if they use memory maps, not faster.

Benchmark

I used to use git grep whenever I was inside a git repo and sift for everything else. That alone was a huge step up from regular grep. Granted, almost all my git repos are small enough that git grep is often faster than I can perceive anyway. But with ripgrep I can just add --no-ignore-vcs and it searches the files mentioned in .gitignore too. That's useful when you want to search your own source as well as the files in node_modules.

The installation instructions are easy. I installed it with brew install ripgrep and the best way to learn how to use it is rg --help. Remember that it has a lot of cool features that are well worth learning. It's written in Rust and so far I haven't had a single crash, ever. The ability to search by file type takes some getting used to (tip! use: rg --type-list) and remember that you can pipe rg output to another rg. For example, to search for all lines that contain query and string you can use rg query | rg string.
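
For reference, a few example invocations of the features mentioned above (the patterns and file type are just illustrations):

▶ rg --no-ignore-vcs TODO        # also search files excluded by .gitignore
▶ rg -t py "def main"            # only search Python files (see rg --type-list)
▶ rg query | rg string           # lines containing both "query" and "string"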

How to unset aliases set by Oh My Zsh

June 14, 2018
5 comments Linux, macOS

I use Oh My Zsh and I highly recommend it. However, it sets some aliases that I don't want. In particular, there's a plugin called git.plugin.zsh (located in ~/.oh-my-zsh/plugins/git/git.plugin.zsh) that interferes with a global binary I have in $PATH. So when I start a shell, the executable gg becomes...:

which gg
gg: aliased to git gui citool

That overrides /usr/local/bin/gg which is the one I want to execute when I type gg. To unset that I can run...:

▶ unalias gg

▶ which gg
/usr/local/bin/gg

To override it "permanently", I added this to the end of ~/.zshrc:


# This unsets the gg alias set by ~/.oh-my-zsh/plugins/git/git.plugin.zsh
# So my /usr/local/bin/gg works instead
unalias gg

Now whenever I start a new terminal, it defaults to the gg in /usr/local/bin instead.

gtop is best

May 2, 2018
0 comments Linux, macOS, JavaScript

To me, using top inside a Linux server via SSH is all muscle memory and it's definitely good enough. On my MacBook, when working on some long-running, resource-intensive code, the best tool I know of is gtop.

gtop in action

I like it because it has the graphs I want and need. It breaks down the load of each CPU core, which is awesome. That's useful for understanding how well a program is able to leverage more than one CPU core.

And it's really nice to have the list of Processes there to be able to quickly compare which programs are running and how that might affect the use of the CPUs.
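
For reference, gtop is a Node.js program, so installing it is roughly this (assuming you already have npm; there may be other install routes too):

▶ npm install -g gtop
▶ gtop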

Instead of listing the alternatives I've tried before, I'll point to this Reddit discussion, which hopefully has good links to other alternatives.

Make .local domains NOT slow in macOS

January 29, 2018
19 comments Linux, macOS

Problem

I used to have a bunch of domains in /etc/hosts, like peterbecom.dev, for testing Nginx configurations locally. But then it became impossible to test local sites in Chrome because .dev domains are force-redirected to HTTPS. No problem, so I use .local instead. However, DNS resolution was horribly slow. For example:


▶ time curl -I http://peterbecom.local/about/minimal.css > /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0  1763    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
curl -I http://peterbecom.local/about/minimal.css > /dev/null  0.01s user 0.01s system 0% cpu 5.585 total

5.6 seconds to open a local file in Nginx.

Solution

Here's that one weird trick to solve it: Add an entry for IPv4 AND IPv6 in /etc/hosts.

So now I have:

cat /etc/hosts | grep peterbecom
127.0.0.1       peterbecom.local
::1             peterbecom.local
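
If the change doesn't seem to take effect right away, flushing the macOS DNS cache might help (an aside; your mileage may vary):

▶ sudo dscacheutil -flushcache
▶ sudo killall -HUP mDNSResponder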

Verification

Ah! Much better. Things are fast again:


▶ time curl -I http://peterbecom.local/about/minimal.css > /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0  1763    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl -I http://peterbecom.local/about/minimal.css > /dev/null  0.01s user 0.01s system 37% cpu 0.041 total

0.04 seconds instead of 5.6.