I have a problem: Jupyter can't see the environment variables in my bashrc file. Is there a way to load these variables in Jupyter, or to add custom variables to it?
12 Answers
To set an environment variable in a Jupyter notebook, just use a magic command, either `%env` or `%set_env`, e.g., `%env MY_VAR=MY_VALUE` or `%env MY_VAR MY_VALUE`. (Use `%env` by itself to print out the current environment variables.)

See: http://ipython.readthedocs.io/en/stable/interactive/magics.html
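Since these magics just assign into `os.environ` behind the scenes, the equivalent plain-Python version (the variable name here is only illustrative) is:

```python
import os

# What `%env MY_VAR=MY_VALUE` does behind the scenes:
os.environ["MY_VAR"] = "MY_VALUE"

# Bare `%env` prints the whole environment; in plain Python you can
# simply read the mapping back:
print(os.environ["MY_VAR"])  # MY_VALUE
```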
I deleted my earlier comments as they weren't quite accurate - but note that the `%env` and `%set_env` magic commands use `os.environ[var] = val` on the backend: github.com/ipython/ipython/blob/master/IPython/core/magics/… – evan_b Jun 22, 2018 at 20:23
@michael Is there any way to persist the environment across all notebooks? Setting the environment this way seems to only persist it for the current notebook. Mar 7, 2019 at 18:01
@questionto42 I absolutely did not in any way mean to imply that `set_env` was at all "like" a `!shell` command; on the contrary, I said exactly the opposite: these settings do not somehow transcend the notebook page itself and do NOT actually set an OS env var (hence will NOT be there for subsequent `!shell` commands). But my comment was in response to a now-deleted comment and the context is lost. Like the comment I replied to, I'll delete my comment to avoid confusion. – michael Jan 26 at 2:56
You can also set the variables in your `kernel.json` file:
My solution is useful if you need the same environment variables every time you start a jupyter kernel, especially if you have multiple sets of environment variables for different tasks.
To create a new ipython kernel with your environment variables, do the following:
- Read the documentation at https://jupyter-client.readthedocs.io/en/stable/kernels.html#kernel-specs
- Run `jupyter kernelspec list` to see a list of installed kernels and where the files are stored.
- Copy the directory that contains the kernel.json (e.g. named `python2`) to a new directory (e.g. `python2_myENV`).
- Change the `display_name` in the new `kernel.json` file.
- Add an `env` dictionary defining the environment variables.
Your kernel.json could look like this (I did not modify anything from the installed kernel.json except `display_name` and `env`):
{
"display_name": "Python 2 with environment",
"language": "python",
"argv": [
"/usr/bin/python2",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"env": {"LD_LIBRARY_PATH":""}
}
Use cases and advantages of this approach
- In my use case, I wanted to set the variable `LD_LIBRARY_PATH`, which affects how compiled modules (e.g. written in C) are loaded. Setting this variable using `%set_env` did not work.
- I can have multiple Python kernels with different environments.
- To change the environment, I only have to switch/restart the kernel, but I do not have to restart the Jupyter instance (useful if I do not want to lose the variables in another notebook). See, however, https://github.com/jupyter/notebook/issues/2647
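The copy-and-edit steps above can be sketched in a shell. The paths below are stand-ins for whatever `jupyter kernelspec list` prints on your machine, and the initial kernel.json is mocked up so the sketch is self-contained:

```shell
# Stand-in directory for what `jupyter kernelspec list` would report
rm -rf /tmp/demo_kernels
SRC=/tmp/demo_kernels/python2          # original kernel directory
DST=/tmp/demo_kernels/python2_myENV    # new kernel directory

# Mock of an installed kernel.json (normally created by ipykernel)
mkdir -p "$SRC"
cat > "$SRC/kernel.json" <<'EOF'
{"display_name": "Python 2", "language": "python",
 "argv": ["/usr/bin/python2", "-m", "ipykernel_launcher", "-f", "{connection_file}"]}
EOF

# Copy the kernel directory, then change display_name and add the env dict
cp -r "$SRC" "$DST"
python3 - "$DST/kernel.json" <<'EOF'
import json, sys
path = sys.argv[1]
with open(path) as f:
    spec = json.load(f)
spec["display_name"] = "Python 2 with environment"
spec["env"] = {"LD_LIBRARY_PATH": ""}
with open(path, "w") as f:
    json.dump(spec, f, indent=1)
EOF

cat "$DST/kernel.json"
```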
Can you please advise me how to add `C:\Program Files (x86)\Graphviz2.38\bin\dot.exe` to the existing system PATH using your suggested technique? Will it work if I am not using an admin account? I am using Windows 10. Mar 5, 2019 at 10:56
Exactly what I needed. Homebrew's Python overwrites `sys.executable` unless `PYTHONEXECUTABLE` is set beforehand, which you have to set before Python runs. Jul 17, 2019 at 11:43
In my opinion this is the only correct answer, because it uses only Jupyter itself, rather than depending on the functionality being available in any specific kernel. Aug 27, 2020 at 12:46
This works for me, but remember to add the correct value you want for the environment variable. The way it is written (at least for me in Jupyter Lab) just erases the variable altogether. For example, to be able to run Tensorflow in a Jupyter Notebook, I used: `"env": {"LD_LIBRARY_PATH":"/opt/miniconda3/envs/tensorflow/lib:${LD_LIBRARY_PATH}"}`. Feb 10, 2022 at 13:56
I had some problems making VS Code use the updated `kernel.json` file. What finally worked for me was to create an `.env` file in the workspace root and set `LD_LIBRARY_PATH=...` there. Jan 11, 2023 at 10:24
If you're using Python, you can define your environment variables in a `.env` file and load them from within a Jupyter notebook using python-dotenv.
Install python-dotenv:
pip install python-dotenv
Load the `.env` file in a Jupyter notebook:
%load_ext dotenv
%dotenv
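Under the hood, the extension essentially parses `KEY=VALUE` lines into `os.environ`. A rough stdlib-only sketch of that behavior (the file path and variable names below are made up for the demo):

```python
import os

# Throwaway .env file just for this demo (python-dotenv reads the same format)
with open("/tmp/demo.env", "w") as f:
    f.write("MY_VAR=hello\nOTHER_VAR=42\n")

# Roughly what `%dotenv` does: parse KEY=VALUE lines into os.environ,
# skipping blank lines and comments
with open("/tmp/demo.env") as f:
    for line in f:
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            os.environ[key] = value

print(os.environ["MY_VAR"])  # hello
```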
You can set up environment variables in your code as follows:
import sys,os,os.path
sys.path.append(os.path.expanduser('~/code/eol_hsrl_python'))
os.environ['HSRL_INSTRUMENT']='gvhsrl'
os.environ['HSRL_CONFIG']=os.path.expanduser('~/hsrl_config')
This is of course a temporary fix; to get a permanent one, you probably need to export the variables into your `~/.profile`. More information can be found here
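The permanent variant might look like the sketch below; `/tmp/demo_profile` stands in for `~/.profile` so the demo does not touch your real profile, and the variable name is taken from the example above:

```shell
# Stand-in for ~/.profile so the demo does not touch your real profile
PROFILE=/tmp/demo_profile
rm -f "$PROFILE"

# Make the variable permanent by appending an export line...
echo 'export HSRL_INSTRUMENT=gvhsrl' >> "$PROFILE"

# ...and source the file, as a new login shell would
. "$PROFILE"
echo "$HSRL_INSTRUMENT"  # gvhsrl
```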
Thanks Kardaj, exporting the variable in ~/.profile solved it; it seems that it's not reading from bashrc, which is kind of weird. Jun 17, 2016 at 22:12
michael's answer with `%env MY_VAR=MY_VALUE` should be the correct answer to this question Jan 16, 2018 at 3:35
@SidaZhou depends on use case - if you want creds to be available in env and don't want creds to be in your notebook (e.g. on source control), then this isn't ideal. Apr 29, 2019 at 11:03
A gotcha I ran into: the following two commands are equivalent. Note that the first cannot use quotes. Somewhat counterintuitively, quoting the string when using `%env VAR ...` will result in the quotes being included as part of the variable's value, which is probably not what you want.
%env MYPATH=C:/Folder Name/file.txt
and
import os
os.environ['MYPATH'] = "C:/Folder Name/file.txt"
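The quoting gotcha can be demonstrated in plain Python; the assignment below mimics what `%env MYPATH="C:/Folder Name/file.txt"` would store:

```python
import os

# With quotes, the quotes become part of the value -- this mimics what
# `%env MYPATH="C:/Folder Name/file.txt"` stores:
os.environ['MYPATH'] = '"C:/Folder Name/file.txt"'
quoted = os.environ['MYPATH']

# Without quotes the value is the bare path, which is usually what you want:
os.environ['MYPATH'] = "C:/Folder Name/file.txt"
unquoted = os.environ['MYPATH']

print(quoted)    # "C:/Folder Name/file.txt"  (quotes included)
print(unquoted)  # C:/Folder Name/file.txt
```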
@Royi Not just on Windows; in a Jupyter Notebook on Linux, this did not change the environment variable either, at least not well enough: it does change something, as it does somehow claim the memory, but it does not seem to fully pass it to the compiler; it seems to be a rights issue of the user that you are in. `%set_env` and `os.environ[]` will both fail if code must run with settings from the command prompt and not in Python. See the answers below: 1, 2 with CUDA_VISIBLE_DEVICES proof. Jan 23 at 10:20
If you need the variable set before starting the notebook, the only solution which worked for me was `env VARIABLE=$VARIABLE jupyter notebook` together with `export VARIABLE=value` in `.bashrc`.

In my case, tensorflow needs the exported variable to be imported successfully in a notebook.
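The mechanism can be sketched with a runnable stand-in, where `python3 -c ...` takes the place of `jupyter notebook` (the variable name is illustrative):

```shell
# In ~/.bashrc you would have:  export VARIABLE=value
VARIABLE=value

# `env VARIABLE=$VARIABLE ...` passes the variable explicitly into the child
# process; python3 stands in for `jupyter notebook` so the sketch is runnable:
env VARIABLE=$VARIABLE python3 -c 'import os; print(os.environ["VARIABLE"])'
```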
Or if you do not want to change the .bashrc, check this answer with CUDA_VISIBLE_DEVICES proof. Jan 23 at 10:37
A related (short-term) solution is to store your environment variables in a single file, with a predictable format, that can be sourced when starting a terminal and/or read into the notebook. For example, I have a file, `.env`, that has my environment variable definitions in the format `VARIABLE_NAME=VARIABLE_VALUE` (no blank lines or extra spaces). You can source this file in `.bashrc` or `.bash_profile` when beginning a new terminal session, and you can read it into a notebook with something like:
import os
env_vars = !cat ../script/.env
for var in env_vars:
    key, value = var.split('=', 1)  # split on the first '=' only, so values may contain '='
    os.environ[key] = value
I used a relative path to show that this `.env` file can live anywhere and be referenced relative to the directory containing the notebook file. This also has the advantage of not displaying the variable values anywhere in your code.
If you are using systemd, I just found out that you seem to have to add them to the systemd unit file. This is on Ubuntu 16. Putting them into .profile and .bashrc (even /etc/profile) resulted in the env vars not being available in the Jupyter notebooks.
I had to edit:
/lib/systemd/system/jupyter-notebook.service
and put the variable I wanted to read into the unit file, like:
Environment=MYOWN_VAR=theVar
and only then could I read it from within the Jupyter notebook.
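In context, the `Environment=` line belongs in the `[Service]` section of the unit file (the variable name below is the same illustrative one as above):

```ini
[Service]
Environment=MYOWN_VAR=theVar
```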
If your notebook is being spawned by JupyterHub, you might need to configure (in `jupyterhub_config.py`) the list of environment variables that are allowed to be carried over from the JupyterHub process environment to the notebook environment by setting

c.Spawner.env_keep = ['VAR1', 'VAR2', ...]
(https://jupyterhub.readthedocs.io/en/stable/api/spawner.html#jupyterhub.spawner.Spawner.env_keep)
See also: Spawner.environment
You can run a Jupyter notebook with Docker and don't have to manage dependency leaks.
docker run -p 8888:8888 -v /home/mee/myfolder:/home/jovyan --name notebook1 jupyter/notebook
docker exec -it notebook1 /bin/bash
Then kindly ask Jupyter about the open notebooks:
jupyter notebook list
http://0.0.0.0:8888/?token=012456788997977a6eb11e45fffff

The URL can be copy-pasted; verify the port if you have changed it.
Create a notebook and paste the following into it:
!pip install python-dotenv
import dotenv
%load_ext dotenv
%dotenv
TLDR

In a Jupyter Notebook on Linux, when loading a PyTorch model, I had to run `!export CUDA_VISIBLE_DEVICES=0,1` to spread the tensors over two GPUs. It was the only way to get the DataParallel code to work; `os.environ[]` and `%set_env` do not work.

This answer is almost a duplicate of another answer here. This seems to be the same if you work with Keras; see this answer above, which this answer takes up, but this answer goes a bit further:

- It shows more ways to run the code without having to change the .bashrc or the Jupyter Notebook service settings.
- It stresses that most of the other answers can be plain wrong, at least if you have to split the tensors among GPUs, and there might be other settings where Python code cannot steer the environment variables well enough.
Shell commands: `export ...` / `MY_ENV_VAR=0,1 python some_cuda_executable.py`

For `CUDA_VISIBLE_DEVICES` and maybe other variables, `%set_env` or `os.environ[]` do not work; the Stack Overflow question How to make Jupyter Notebook run on one specified gpu of multi-gpus? ran into the same thing. Thus, in some settings, you just have to run a command in the CLI and not in Python, and then there is no way to do this in pure Python code.
Instead:

- Run `!export ...` in a Jupyter Notebook cell together with the code that needs these environment variables; it must be the same cell and must be run at the beginning of it.
- You can also run it in a CLI execution like `CUDA_VISIBLE_DEVICES=4,5,6,7 python forward_test_ins.py`.
The main thing is that you should not write `os.environ[]` or `%set_env` in the code unless you tweak the .bashrc (untested), see this answer above, or the Jupyter Notebook service settings (untested), see this answer above. But mind: the error will also be thrown outside of a Jupyter Notebook if you run a Python file. Also check this answer above for the kernel.json file (untested).
Further links
For more, see:
- How to Solve "CUDA: Invalid Device Ordinal" Error in PyTorch Single-GPU Inference on a Model Trained with DataParallel?, which shows that neither `os.environ[]` nor `%set_env` works from code,
- this CUDA_VISIBLE_DEVICES example for parallel computing with many GPUs at Transformers Trainer: "RuntimeError: module must have its parameters ... on device cuda:6 (device_ids[0]) but found one of them on device: cuda:0",
- the entries at setting CUDA_VISIBLE_DEVICES just has no effect #9158.
If you have VS Code, just create the env as always in the terminal, then go to the Jupyter file and click "Select Kernel" in the top right; the command palette will open with the env path. Click on it and it will use that env.
The question is not about a virtual environment but about an environment variable. Jan 15 at 20:47