datacubeR

Bayesian Optimization

This is going to be a very quick tutorial on using Optuna, a Bayesian optimization library, together with Weights & Biases, which I hope will become my platform for storing model training logs.

I have tried other platforms such as MLflow (you can see a tutorial here), which I found extremely hard to learn and quite unintuitive. I don't think it's a bad tool, but it never won me over. I also looked at Neptune, but it is a relatively new library with few stars on GitHub.

Weights & Biases, on the other hand, is becoming the most widely used tool in research, and it is considerably simpler than the rest, although it does have a slight learning curve. In my opinion the platform is intuitive, but I feel more effort should go into the documentation.

To try this out quickly, let's see whether we can solve a simple optimization problem.

Consider the following example: \[y = x^2 + 1\]

Finding the minimum of this function is easy: just take the derivative and set it equal to zero:

\(y' = 2x = 0 \rightarrow x = 0\) \(y = 0^2 + 1 = 1\)

So the minimum of this parabola is at the coordinate (0, 1).

Using Optuna

Optuna is a library designed for optimization processes in general, but it is normally used to solve hyperparameter search problems. That is, you search for the hyperparameters that yield the optimal metric, which may be the maximum (Accuracy or R²) or the minimum (Logloss or MSE).

You therefore provide a range in which to try the hyperparameters and the number of samples to draw from it. Since the process is Bayesian optimization, it uses the previous results to move closer to the true optimum without needing a GridSearch (checking every single possible combination).

Since this is a stochastic process, it may fail to converge, so it is important to set an appropriate number of trials for the process to work as expected.
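To build intuition for why the number of trials matters, the idea can be sketched with plain random search over the same range, in pure Python with no Optuna. This is only an illustrative baseline (the `random_search` helper is made up for this sketch); unlike Optuna's sampler, each draw here ignores all previous results:

```python
import random

def objective(x):
    # the same function we will minimize below
    return x**2 + 1

def random_search(n_trials, low=-10, high=10, seed=42):
    # draw n_trials uniform samples and keep the best one
    rng = random.Random(seed)
    best_x, best_y = None, float("inf")
    for _ in range(n_trials):
        x = rng.uniform(low, high)
        y = objective(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# more trials -> usually closer to the true minimum at (0, 1)
_, y_10 = random_search(10)
_, y_500 = random_search(500)
```

With only 10 draws the best value can still be far from 1; with 500 it is typically very close. A Bayesian sampler aims to get there with far fewer evaluations by concentrating samples where previous results were good.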

import wandb
import optuna

The first step is to create an account at wandb.ai/login. My recommendation is to use your GitHub account, since that links the code in your repos to the platform more smoothly.

wandb.login()
wandb: Currently logged in as: datacuber (use `wandb login --relogin` to force relogin)
True

wandb.login() logs you into the platform. The first time it will ask for your API key, but from then on it stays stored on your system. I really liked this, since I didn't have to configure anything. Other platforms like Neptune make you store the authentication TOKEN in an environment variable yourself, which can be more complicated on an OS like Windows.

Optimizing with Optuna

We define a range where we suspect the optimum we care about lies, in this case the global minimum. It is defined as a dictionary as follows:

rango = {'min': -10,
        'max': 10}

Then we define a function, which can be named however we like. This function must take a trial parameter, which represents one trial. The function must pick a value from the given range, in this case via trial.suggest_uniform(), and evaluate it in the expression to optimize, which goes in the return.

There are several ways to sample values, depending on the type of variable or on how we want to do it. For more information, go here.

def optimize_squared(trial):
    x = trial.suggest_uniform('x', rango['min'], rango['max'])
    return x**2 + 1  # our function to optimize

Finally, to run the optimization we create a study in which we want to minimize. We choose the function and the number of trials, and that's it.

study = optuna.create_study(direction = 'minimize')
study.optimize(optimize_squared, n_trials=50)
[I 2021-06-28 23:09:18,931] A new study created in memory with name: no-name-8cdd525d-2017-462c-b030-427b6c38ec32
[I 2021-06-28 23:09:18,937] Trial 0 finished with value: 23.902017490831042 and parameters: {'x': -4.785605237671724}. Best is trial 0 with value: 23.902017490831042.
[I 2021-06-28 23:09:18,941] Trial 1 finished with value: 2.597698699418838 and parameters: {'x': -1.2640010678076337}. Best is trial 1 with value: 2.597698699418838.
[I 2021-06-28 23:09:18,943] Trial 2 finished with value: 31.022405959147648 and parameters: {'x': -5.479270568163946}. Best is trial 1 with value: 2.597698699418838.
[I 2021-06-28 23:09:18,946] Trial 3 finished with value: 67.73334832123625 and parameters: {'x': 8.16904818943041}. Best is trial 1 with value: 2.597698699418838.
[I 2021-06-28 23:09:18,948] Trial 4 finished with value: 24.48057869420729 and parameters: {'x': 4.8456762886316795}. Best is trial 1 with value: 2.597698699418838.
[I 2021-06-28 23:09:18,950] Trial 5 finished with value: 21.09260385213723 and parameters: {'x': 4.4824774234944265}. Best is trial 1 with value: 2.597698699418838.
[I 2021-06-28 23:09:18,952] Trial 6 finished with value: 68.65242786697495 and parameters: {'x': -8.225109596046423}. Best is trial 1 with value: 2.597698699418838.
[I 2021-06-28 23:09:18,954] Trial 7 finished with value: 41.22835487370143 and parameters: {'x': -6.342582665894188}. Best is trial 1 with value: 2.597698699418838.
[I 2021-06-28 23:09:18,955] Trial 8 finished with value: 39.87219088922785 and parameters: {'x': 6.234756682439809}. Best is trial 1 with value: 2.597698699418838.
[I 2021-06-28 23:09:18,956] Trial 9 finished with value: 4.320442596901529 and parameters: {'x': -1.8222081650847493}. Best is trial 1 with value: 2.597698699418838.
[I 2021-06-28 23:09:18,961] Trial 10 finished with value: 1.5979800173633043 and parameters: {'x': 0.7732916767709996}. Best is trial 10 with value: 1.5979800173633043.
[I 2021-06-28 23:09:18,965] Trial 11 finished with value: 1.0345315007217994 and parameters: {'x': 0.18582653395519022}. Best is trial 11 with value: 1.0345315007217994.
[I 2021-06-28 23:09:18,968] Trial 12 finished with value: 7.060819899863781 and parameters: {'x': 2.461873250162116}. Best is trial 11 with value: 1.0345315007217994.
[I 2021-06-28 23:09:18,971] Trial 13 finished with value: 3.9300176669297717 and parameters: {'x': 1.711729437419878}. Best is trial 11 with value: 1.0345315007217994.
[I 2021-06-28 23:09:18,975] Trial 14 finished with value: 1.0468883547570575 and parameters: {'x': 0.21653719024005427}. Best is trial 11 with value: 1.0345315007217994.
[I 2021-06-28 23:09:18,978] Trial 15 finished with value: 8.572676590204937 and parameters: {'x': -2.7518496670793877}. Best is trial 11 with value: 1.0345315007217994.
[I 2021-06-28 23:09:18,981] Trial 16 finished with value: 93.63719717806818 and parameters: {'x': 9.624821929680994}. Best is trial 11 with value: 1.0345315007217994.
[I 2021-06-28 23:09:18,985] Trial 17 finished with value: 10.927202880804595 and parameters: {'x': -3.150746400585835}. Best is trial 11 with value: 1.0345315007217994.
[I 2021-06-28 23:09:18,988] Trial 18 finished with value: 1.0628486643725503 and parameters: {'x': 0.2506963589136273}. Best is trial 11 with value: 1.0345315007217994.
[I 2021-06-28 23:09:18,992] Trial 19 finished with value: 11.672926127784672 and parameters: {'x': 3.266944463529289}. Best is trial 11 with value: 1.0345315007217994.
[I 2021-06-28 23:09:18,996] Trial 20 finished with value: 96.53703677659003 and parameters: {'x': -9.77430492549675}. Best is trial 11 with value: 1.0345315007217994.
[I 2021-06-28 23:09:18,999] Trial 21 finished with value: 1.0073560041574146 and parameters: {'x': 0.08576715080620578}. Best is trial 21 with value: 1.0073560041574146.
[I 2021-06-28 23:09:19,002] Trial 22 finished with value: 1.5569047360360124 and parameters: {'x': -0.7462605014577767}. Best is trial 21 with value: 1.0073560041574146.
[I 2021-06-28 23:09:19,005] Trial 23 finished with value: 1.6162128738110146 and parameters: {'x': 0.7849922762747508}. Best is trial 21 with value: 1.0073560041574146.
[I 2021-06-28 23:09:19,009] Trial 24 finished with value: 15.627953973496544 and parameters: {'x': -3.824650830271509}. Best is trial 21 with value: 1.0073560041574146.
[I 2021-06-28 23:09:19,012] Trial 25 finished with value: 12.001754025332767 and parameters: {'x': 3.3168892090832287}. Best is trial 21 with value: 1.0073560041574146.
[I 2021-06-28 23:09:19,015] Trial 26 finished with value: 1.1010945372091034 and parameters: {'x': -0.31795367148234577}. Best is trial 21 with value: 1.0073560041574146.
[I 2021-06-28 23:09:19,019] Trial 27 finished with value: 5.690980650893347 and parameters: {'x': -2.165867182191315}. Best is trial 21 with value: 1.0073560041574146.
[I 2021-06-28 23:09:19,022] Trial 28 finished with value: 3.881763808043724 and parameters: {'x': 1.6975758622352415}. Best is trial 21 with value: 1.0073560041574146.
[I 2021-06-28 23:09:19,026] Trial 29 finished with value: 22.704288239315446 and parameters: {'x': -4.658786133674248}. Best is trial 21 with value: 1.0073560041574146.
[I 2021-06-28 23:09:19,030] Trial 30 finished with value: 34.57070775311028 and parameters: {'x': 5.794023451204722}. Best is trial 21 with value: 1.0073560041574146.
[I 2021-06-28 23:09:19,034] Trial 31 finished with value: 1.786438806863386 and parameters: {'x': 0.8868138513032969}. Best is trial 21 with value: 1.0073560041574146.
[I 2021-06-28 23:09:19,038] Trial 32 finished with value: 1.13515549875073 and parameters: {'x': -0.36763500751523914}. Best is trial 21 with value: 1.0073560041574146.
[I 2021-06-28 23:09:19,042] Trial 33 finished with value: 3.034146749479723 and parameters: {'x': -1.4262351662610633}. Best is trial 21 with value: 1.0073560041574146.
[I 2021-06-28 23:09:19,046] Trial 34 finished with value: 8.00690795051327 and parameters: {'x': 2.647056469082832}. Best is trial 21 with value: 1.0073560041574146.
[I 2021-06-28 23:09:19,049] Trial 35 finished with value: 1.0056068731049788 and parameters: {'x': 0.07487905651768616}. Best is trial 35 with value: 1.0056068731049788.
[I 2021-06-28 23:09:19,053] Trial 36 finished with value: 18.948200564457768 and parameters: {'x': 4.236531666877727}. Best is trial 35 with value: 1.0056068731049788.
[I 2021-06-28 23:09:19,057] Trial 37 finished with value: 3.0220522532879452 and parameters: {'x': -1.4219888372585578}. Best is trial 35 with value: 1.0056068731049788.
[I 2021-06-28 23:09:19,061] Trial 38 finished with value: 41.17041626306449 and parameters: {'x': -6.338013589687583}. Best is trial 35 with value: 1.0056068731049788.
[I 2021-06-28 23:09:19,064] Trial 39 finished with value: 3.9786140127217138 and parameters: {'x': 1.725866163038639}. Best is trial 35 with value: 1.0056068731049788.
[I 2021-06-28 23:09:19,068] Trial 40 finished with value: 19.75443303437414 and parameters: {'x': -4.330638871387701}. Best is trial 35 with value: 1.0056068731049788.
[I 2021-06-28 23:09:19,072] Trial 41 finished with value: 1.067267840439896 and parameters: {'x': 0.25936044501792493}. Best is trial 35 with value: 1.0056068731049788.
[I 2021-06-28 23:09:19,076] Trial 42 finished with value: 1.8919174991521621 and parameters: {'x': -0.9444138389245269}. Best is trial 35 with value: 1.0056068731049788.
[I 2021-06-28 23:09:19,080] Trial 43 finished with value: 1.0853379277082995 and parameters: {'x': 0.2921265611140137}. Best is trial 35 with value: 1.0056068731049788.
[I 2021-06-28 23:09:19,084] Trial 44 finished with value: 5.7274003922895185 and parameters: {'x': -2.1742585845040416}. Best is trial 35 with value: 1.0056068731049788.
[I 2021-06-28 23:09:19,088] Trial 45 finished with value: 2.7663831230225453 and parameters: {'x': 1.3290534688350748}. Best is trial 35 with value: 1.0056068731049788.
[I 2021-06-28 23:09:19,092] Trial 46 finished with value: 9.336070192939113 and parameters: {'x': -2.8872253450222956}. Best is trial 35 with value: 1.0056068731049788.
[I 2021-06-28 23:09:19,096] Trial 47 finished with value: 7.29747062572058 and parameters: {'x': 2.5094761656012157}. Best is trial 35 with value: 1.0056068731049788.
[I 2021-06-28 23:09:19,100] Trial 48 finished with value: 15.399524503716533 and parameters: {'x': 3.7946705395483984}. Best is trial 35 with value: 1.0056068731049788.
[I 2021-06-28 23:09:19,104] Trial 49 finished with value: 1.000010000625206 and parameters: {'x': 0.003162376512363424}. Best is trial 49 with value: 1.000010000625206.

To check the results we can do the following:

print('optimal x: ', study.best_params)
print('optimal y', study.best_value)
optimal x:  {'x': 0.003162376512363424}
optimal y 1.000010000625206

We can also get a pandas dataframe with all the trials performed and analyze it however we like.

study.trials_dataframe()
|    | number | value | datetime_start | datetime_complete | duration | params_x | state |
|----|--------|-------|----------------|-------------------|----------|----------|-------|
| 0 | 0 | 23.902017 | 2021-06-28 23:09:18.935332 | 2021-06-28 23:09:18.936417 | 0 days 00:00:00.001085 | -4.785605 | COMPLETE |
| 1 | 1 | 2.597699 | 2021-06-28 23:09:18.939868 | 2021-06-28 23:09:18.940357 | 0 days 00:00:00.000489 | -1.264001 | COMPLETE |
| 2 | 2 | 31.022406 | 2021-06-28 23:09:18.942424 | 2021-06-28 23:09:18.942869 | 0 days 00:00:00.000445 | -5.479271 | COMPLETE |
| 3 | 3 | 67.733348 | 2021-06-28 23:09:18.944934 | 2021-06-28 23:09:18.945429 | 0 days 00:00:00.000495 | 8.169048 | COMPLETE |
| 4 | 4 | 24.480579 | 2021-06-28 23:09:18.947517 | 2021-06-28 23:09:18.947971 | 0 days 00:00:00.000454 | 4.845676 | COMPLETE |
| 5 | 5 | 21.092604 | 2021-06-28 23:09:18.949708 | 2021-06-28 23:09:18.950099 | 0 days 00:00:00.000391 | 4.482477 | COMPLETE |
| 6 | 6 | 68.652428 | 2021-06-28 23:09:18.951713 | 2021-06-28 23:09:18.952076 | 0 days 00:00:00.000363 | -8.225110 | COMPLETE |
| 7 | 7 | 41.228355 | 2021-06-28 23:09:18.953625 | 2021-06-28 23:09:18.954000 | 0 days 00:00:00.000375 | -6.342583 | COMPLETE |
| 8 | 8 | 39.872191 | 2021-06-28 23:09:18.955054 | 2021-06-28 23:09:18.955310 | 0 days 00:00:00.000256 | 6.234757 | COMPLETE |
| 9 | 9 | 4.320443 | 2021-06-28 23:09:18.956372 | 2021-06-28 23:09:18.956664 | 0 days 00:00:00.000292 | -1.822208 | COMPLETE |
| 10 | 10 | 1.597980 | 2021-06-28 23:09:18.957751 | 2021-06-28 23:09:18.961695 | 0 days 00:00:00.003944 | 0.773292 | COMPLETE |
| 11 | 11 | 1.034532 | 2021-06-28 23:09:18.962870 | 2021-06-28 23:09:18.965450 | 0 days 00:00:00.002580 | 0.185827 | COMPLETE |
| 12 | 12 | 7.060820 | 2021-06-28 23:09:18.965972 | 2021-06-28 23:09:18.968642 | 0 days 00:00:00.002670 | 2.461873 | COMPLETE |
| 13 | 13 | 3.930018 | 2021-06-28 23:09:18.969124 | 2021-06-28 23:09:18.971723 | 0 days 00:00:00.002599 | 1.711729 | COMPLETE |
| 14 | 14 | 1.046888 | 2021-06-28 23:09:18.972269 | 2021-06-28 23:09:18.975136 | 0 days 00:00:00.002867 | 0.216537 | COMPLETE |
| 15 | 15 | 8.572677 | 2021-06-28 23:09:18.975670 | 2021-06-28 23:09:18.978126 | 0 days 00:00:00.002456 | -2.751850 | COMPLETE |
| 16 | 16 | 93.637197 | 2021-06-28 23:09:18.978572 | 2021-06-28 23:09:18.981806 | 0 days 00:00:00.003234 | 9.624822 | COMPLETE |
| 17 | 17 | 10.927203 | 2021-06-28 23:09:18.982243 | 2021-06-28 23:09:18.985387 | 0 days 00:00:00.003144 | -3.150746 | COMPLETE |
| 18 | 18 | 1.062849 | 2021-06-28 23:09:18.986136 | 2021-06-28 23:09:18.988484 | 0 days 00:00:00.002348 | 0.250696 | COMPLETE |
| 19 | 19 | 11.672926 | 2021-06-28 23:09:18.988975 | 2021-06-28 23:09:18.992569 | 0 days 00:00:00.003594 | 3.266944 | COMPLETE |
| 20 | 20 | 96.537037 | 2021-06-28 23:09:18.993133 | 2021-06-28 23:09:18.995906 | 0 days 00:00:00.002773 | -9.774305 | COMPLETE |
| 21 | 21 | 1.007356 | 2021-06-28 23:09:18.996279 | 2021-06-28 23:09:18.999083 | 0 days 00:00:00.002804 | 0.085767 | COMPLETE |
| 22 | 22 | 1.556905 | 2021-06-28 23:09:18.999442 | 2021-06-28 23:09:19.002391 | 0 days 00:00:00.002949 | -0.746261 | COMPLETE |
| 23 | 23 | 1.616213 | 2021-06-28 23:09:19.002844 | 2021-06-28 23:09:19.005757 | 0 days 00:00:00.002913 | 0.784992 | COMPLETE |
| 24 | 24 | 15.627954 | 2021-06-28 23:09:19.006150 | 2021-06-28 23:09:19.009011 | 0 days 00:00:00.002861 | -3.824651 | COMPLETE |
| 25 | 25 | 12.001754 | 2021-06-28 23:09:19.009392 | 2021-06-28 23:09:19.012153 | 0 days 00:00:00.002761 | 3.316889 | COMPLETE |
| 26 | 26 | 1.101095 | 2021-06-28 23:09:19.012580 | 2021-06-28 23:09:19.015344 | 0 days 00:00:00.002764 | -0.317954 | COMPLETE |
| 27 | 27 | 5.690981 | 2021-06-28 23:09:19.015695 | 2021-06-28 23:09:19.018904 | 0 days 00:00:00.003209 | -2.165867 | COMPLETE |
| 28 | 28 | 3.881764 | 2021-06-28 23:09:19.019647 | 2021-06-28 23:09:19.022072 | 0 days 00:00:00.002425 | 1.697576 | COMPLETE |
| 29 | 29 | 22.704288 | 2021-06-28 23:09:19.022680 | 2021-06-28 23:09:19.026501 | 0 days 00:00:00.003821 | -4.658786 | COMPLETE |
| 30 | 30 | 34.570708 | 2021-06-28 23:09:19.027023 | 2021-06-28 23:09:19.030706 | 0 days 00:00:00.003683 | 5.794023 | COMPLETE |
| 31 | 31 | 1.786439 | 2021-06-28 23:09:19.031141 | 2021-06-28 23:09:19.034242 | 0 days 00:00:00.003101 | 0.886814 | COMPLETE |
| 32 | 32 | 1.135155 | 2021-06-28 23:09:19.034923 | 2021-06-28 23:09:19.038423 | 0 days 00:00:00.003500 | -0.367635 | COMPLETE |
| 33 | 33 | 3.034147 | 2021-06-28 23:09:19.038979 | 2021-06-28 23:09:19.042155 | 0 days 00:00:00.003176 | -1.426235 | COMPLETE |
| 34 | 34 | 8.006908 | 2021-06-28 23:09:19.042866 | 2021-06-28 23:09:19.045919 | 0 days 00:00:00.003053 | 2.647056 | COMPLETE |
| 35 | 35 | 1.005607 | 2021-06-28 23:09:19.046355 | 2021-06-28 23:09:19.049460 | 0 days 00:00:00.003105 | 0.074879 | COMPLETE |
| 36 | 36 | 18.948201 | 2021-06-28 23:09:19.049848 | 2021-06-28 23:09:19.053248 | 0 days 00:00:00.003400 | 4.236532 | COMPLETE |
| 37 | 37 | 3.022052 | 2021-06-28 23:09:19.053992 | 2021-06-28 23:09:19.057380 | 0 days 00:00:00.003388 | -1.421989 | COMPLETE |
| 38 | 38 | 41.170416 | 2021-06-28 23:09:19.057840 | 2021-06-28 23:09:19.060986 | 0 days 00:00:00.003146 | -6.338014 | COMPLETE |
| 39 | 39 | 3.978614 | 2021-06-28 23:09:19.061569 | 2021-06-28 23:09:19.064579 | 0 days 00:00:00.003010 | 1.725866 | COMPLETE |
| 40 | 40 | 19.754433 | 2021-06-28 23:09:19.065140 | 2021-06-28 23:09:19.068616 | 0 days 00:00:00.003476 | -4.330639 | COMPLETE |
| 41 | 41 | 1.067268 | 2021-06-28 23:09:19.069304 | 2021-06-28 23:09:19.072759 | 0 days 00:00:00.003455 | 0.259360 | COMPLETE |
| 42 | 42 | 1.891917 | 2021-06-28 23:09:19.073384 | 2021-06-28 23:09:19.076815 | 0 days 00:00:00.003431 | -0.944414 | COMPLETE |
| 43 | 43 | 1.085338 | 2021-06-28 23:09:19.077278 | 2021-06-28 23:09:19.080316 | 0 days 00:00:00.003038 | 0.292127 | COMPLETE |
| 44 | 44 | 5.727400 | 2021-06-28 23:09:19.080888 | 2021-06-28 23:09:19.084064 | 0 days 00:00:00.003176 | -2.174259 | COMPLETE |
| 45 | 45 | 2.766383 | 2021-06-28 23:09:19.084748 | 2021-06-28 23:09:19.088206 | 0 days 00:00:00.003458 | 1.329053 | COMPLETE |
| 46 | 46 | 9.336070 | 2021-06-28 23:09:19.088738 | 2021-06-28 23:09:19.092169 | 0 days 00:00:00.003431 | -2.887225 | COMPLETE |
| 47 | 47 | 7.297471 | 2021-06-28 23:09:19.092772 | 2021-06-28 23:09:19.096148 | 0 days 00:00:00.003376 | 2.509476 | COMPLETE |
| 48 | 48 | 15.399525 | 2021-06-28 23:09:19.096937 | 2021-06-28 23:09:19.100724 | 0 days 00:00:00.003787 | 3.794671 | COMPLETE |
| 49 | 49 | 1.000010 | 2021-06-28 23:09:19.101393 | 2021-06-28 23:09:19.104337 | 0 days 00:00:00.002944 | 0.003162 | COMPLETE |
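Since this is a regular pandas DataFrame, all the usual operations apply. A small sketch of a typical analysis; to keep it self-contained, it runs on a hand-made mock with the same columns rather than on the study above:

```python
import pandas as pd

# a small mock of study.trials_dataframe() output (values taken from the run above)
trials = pd.DataFrame({
    "number": [0, 1, 2, 3],
    "value": [23.902017, 2.597699, 31.022406, 1.000010],
    "params_x": [-4.785605, -1.264001, -5.479271, 0.003162],
    "state": ["COMPLETE"] * 4,
})

# best trials first
best = trials.sort_values("value").head(3)

# keep only trials that got reasonably close to the optimum y = 1
close = trials[trials["value"] < 2]
```

The same two lines work unchanged on the real study.trials_dataframe() result.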

The results are as expected: x should be close to 0 and y close to 1.

Saving the results to wandb

To store the results in Weights & Biases we can call wandb.init() and pass a project name. We can also add some tags to identify the run we are about to do.

A run is equivalent to one model on which different hyperparameters will be tried. Each trial is a different combination of hyperparameters, and they are stored via run.log(). In our case we log step as the experiment number, while trial.params holds all the parameters being optimized, in our case x. Then, via trial.value, we store the result of our objective.

Finally, via run.summary we can store whatever we want. I like to store the best y and the optimal parameters.

with wandb.init(project="nuevo-proyecto",
                tags=['optimization', 'quadratic']) as run:
    for step, trial in enumerate(study.trials):

        run.log(trial.params, step = step)
        run.log({"y": trial.value})

    run.summary['best_y'] = study.best_value
    run.summary['best_params'] = study.best_params

Tracking run with wandb version 0.10.32
Syncing run dark-sky-3 to Weights & Biases (Documentation).
Project page: https://wandb.ai/datacuber/nuevo-proyecto
Run page: https://wandb.ai/datacuber/nuevo-proyecto/runs/29ttsi72
Run data is saved locally in /home/alfonso/Documents/kaggle/titanic/wandb/run-20210628_230927-29ttsi72


Waiting for W&B process to finish, PID 442316
Program ended successfully.


Find user logs for this run at: /home/alfonso/Documents/kaggle/titanic/wandb/run-20210628_230927-29ttsi72/logs/debug.log

Find internal logs for this run at: /home/alfonso/Documents/kaggle/titanic/wandb/run-20210628_230927-29ttsi72/logs/debug-internal.log

Run summary:

| key | value |
|-----|-------|
| x | 0.00316 |
| y | 1.00001 |
| _runtime | 5 |
| _timestamp | 1624936172 |
| _step | 49 |
| best_y | 1.00001 |

Run history: sparklines for x, y, _runtime, _timestamp and _step (rendered graphically in the notebook output).

Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)


Synced dark-sky-3: https://wandb.ai/datacuber/nuevo-proyecto/runs/29ttsi72

The results can be visualized on the wandb portal via the link provided. In the portal you can add as many charts as you need; the idea is to visualize the project as well as possible:


This quick tutorial shows how to get started with Weights & Biases very quickly. I hope to do another one showing more of the benefits of this tool, focused squarely on its use with Machine Learning models.

api = wandb.Api()
# run is specified by <entity>/<project>/<run id>
run = api.run("datacuber/optuna/2k1vy0cj")
metrics_df = run.history()
metrics_df
|    | _step | x | _runtime | mse | z | _timestamp |
|----|-------|---|----------|-----|---|------------|
| 0 | 0 | -6.377177 | 2 | 85.680586 | 8.816779 | 1613775185 |
| 1 | 1 | -0.069629 | 2 | 169.460192 | 7.543658 | 1613775185 |
| 2 | 2 | -4.119354 | 2 | 315.556214 | -5.822274 | 1613775185 |
| 3 | 3 | -9.321491 | 2 | 14.458745 | 3.759513 | 1613775185 |
| 4 | 4 | -4.059086 | 2 | 10.451453 | 1.413108 | 1613775185 |
| … | … | … | … | … | … | … |
| 95 | 95 | 4.880971 | 2 | 0.139162 | -1.627007 | 1613775185 |
| 96 | 96 | 4.535738 | 2 | 13.287564 | -3.090475 | 1613775185 |
| 97 | 97 | 5.190690 | 2 | 3.631573 | -2.548179 | 1613775185 |
| 98 | 98 | 5.976324 | 2 | 10.780678 | -3.629859 | 1613775185 |
| 99 | 99 | 4.888123 | 2 | 0.010747 | -1.495896 | 1613775185 |

100 rows × 6 columns

system_metrics = run.history(stream = 'events')
system_metrics
The events stream returns one row of system metrics, with columns such as system.network.sent, system.network.recv, system.disk, system.gpu.0.temp, system.gpu.0.memory, system.gpu.0.gpu, system.proc.memory.rssMB, system.proc.memory.availableMB, system.cpu, system.proc.cpu.threads, system.memory, system.proc.memory.percent, system.gpu.0.memoryAllocated, _runtime, _timestamp and _wandb.
run.summary
{'z': -1.4958962538521758, 'mse': 0.010747402305021076, 'best': {'x': 4.888122828892169, 'z': -1.4958962538521758}, '_step': 99, '_runtime': 2, '_timestamp': 1613775185, 'x': 4.888122828892169}
