2023 Dec 7th – UQ PUG 4

Welcome to UQ Python User Group! Check out our general information for details about who we are and what we do.

Structure

  1. We start today by adding our names to the table below
  2. Add your questions to this page
  3. This month’s presentation
  4. Finally, we spend the rest of the session answering the questions you’ve brought!

Mailing list

If you would like to be on the mailing list and receive the latest PUG updates, please sign up here:

https://forms.office.com/r/6qvfFX0qGr

Feel free to send this link to anyone you think may benefit.

Training Resources

We offer Python training sessions and resources; you can find our introductory guide here.

Introduce yourself

| What's your name? | Where are you from? | Why are you here? |
|-------------------|---------------------|-------------------|
| Karen | UQ Business School | To learn |
| Cameron | UQ Library | To help and learn |
| Po-Yen | UQ Business School | Get some help and learn, I guess :) |
| Sagar | Bangladesh | Learn Python |

Questions

If you have any Python questions you’d like to explore with the group, please put them in a markdown cell, with any code you’d like us to run in a Python cell.

Question 1 - CycleGAN architecture - Po-Yen

A question about the CycleGAN architecture code below; the comments in the first cell describe what I'm stuck on. (Po-Yen)

# I don't understand the code under "try" or the code under "except"
# I also find that once I load the TPU, I have to wait to use this engine
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_addons as tfa

from kaggle_datasets import KaggleDatasets
import matplotlib.pyplot as plt
import numpy as np

try:
    # Look for a TPU attached to this session; this raises an error if none is found
    tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
    print('Device:', tpu.master())
    # Connect to the TPU cluster and initialise it before use
    tf.config.experimental_connect_to_cluster(tpu)
    tf.tpu.experimental.initialize_tpu_system(tpu)
    # Distribution strategy that spreads work across the TPU cores
    strategy = tf.distribute.experimental.TPUStrategy(tpu)
except:
    # No TPU was found (or setup failed), so fall back to the default CPU/GPU strategy
    strategy = tf.distribute.get_strategy()
print('Number of replicas:', strategy.num_replicas_in_sync)

AUTOTUNE = tf.data.experimental.AUTOTUNE
    
print(tf.__version__)
a = 5
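
The try/except in the cell above is a fallback. Python first runs the code under "try", which looks for a TPU and builds a TPUStrategy around it. If anything in that block raises an error (most commonly because no TPU is attached to the session), execution jumps to the "except" block instead, which returns the default strategy that runs on the CPU or GPU. The wait you see after switching to a TPU is normal: the accelerator has to be allocated to your session and initialised before your code can use it. The small cell below shows how Python decides which "except" branch to run.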

# A small demonstration of how try/except chooses a branch
try:
    "apple"**2   # raising a string to a power is not allowed, so this raises a TypeError
except ValueError:
    print("We're inside the valueerror section")   # skipped: the error is not a ValueError
except TypeError:
    print("We're inside the typeerror section")    # runs: TypeError matches the error raised

    

Answers

You may be able to process your data on some kind of HPC or cloud computing platform.

At UQ, you can get access to the Wiener HPC, which has GPUs optimised for ML.

You should still be eligible for Nectar. This should be fairly easy to sign up to, but your access may disappear when you graduate.

You might be able to sign up for MLeRP. This may take more effort than signing up for Nectar, but in previous discussions with people at MLeRP they have seemed more accepting of users who aren't attached to a research institution.

Question 2 - LangChain agent NameError - Name

I'm running a LangChain agent that uses an image-captioning tool and it fails with the error below. Can you tell me what the error means?

!pip install -qU langchain openai transformers
from langchain.tools import BaseTool
from langchain.agents import initialize_agent
help(initialize_agent)
## Can you tell me what this error means?

agent(f"What does this image show?\n{img_url}")

> Entering new AgentExecutor chain...
{
    "action": "Image captioner",
    "action_input": "https://images.unsplash.com/photo-1616128417859-3a984dd35f02?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=2372&q=80"
}
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_5440\710257616.py in <module>
----> 1 agent(f"What does this image show?\n{img_url}")

~\anaconda3\lib\site-packages\langchain\chains\base.py in __call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    310         except BaseException as e:
    311             run_manager.on_chain_error(e)
--> 312             raise e
    313         run_manager.on_chain_end(outputs)
    314         final_outputs: Dict[str, Any] = self.prep_outputs(

~\anaconda3\lib\site-packages\langchain\chains\base.py in __call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    304         try:
    305             outputs = (
--> 306                 self._call(inputs, run_manager=run_manager)
    307                 if new_arg_supported
    308                 else self._call(inputs)

~\anaconda3\lib\site-packages\langchain\agents\agent.py in _call(self, inputs, run_manager)
   1310         # We now enter the agent loop (until it returns something).
   1311         while self._should_continue(iterations, time_elapsed):
-> 1312             next_step_output = self._take_next_step(
   1313                 name_to_tool_map,
   1314                 color_mapping,

~\anaconda3\lib\site-packages\langchain\agents\agent.py in _take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1036     ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
   1037         return self._consume_next_step(
-> 1038             [
   1039                 a
   1040                 for a in self._iter_next_step(

~\anaconda3\lib\site-packages\langchain\agents\agent.py in <listcomp>(.0)
   1036     ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
   1037         return self._consume_next_step(
-> 1038             [
   1039                 a
   1040                 for a in self._iter_next_step(

~\anaconda3\lib\site-packages\langchain\agents\agent.py in _iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1132                     tool_run_kwargs["llm_prefix"] = ""
   1133                 # We then call the tool on the tool input to get an observation
-> 1134                 observation = tool.run(
   1135                     agent_action.tool_input,
   1136                     verbose=self.verbose,

~\anaconda3\lib\site-packages\langchain_core\tools.py in run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
    363         except (Exception, KeyboardInterrupt) as e:
    364             run_manager.on_tool_error(e)
--> 365             raise e
    366         else:
    367             run_manager.on_tool_end(

~\anaconda3\lib\site-packages\langchain_core\tools.py in run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
    337                 self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
    338                 if new_arg_supported
--> 339                 else self._run(*tool_args, **tool_kwargs)
    340             )
    341         except ToolException as e:

~\AppData\Local\Temp\ipykernel_5440\1932156217.py in _run(self, url)
     12         image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
     13         # preprocess the image
---> 14         inputs = processor(image, return_tensors="pt").to(device)
     15         # generate the caption
     16         out = model.generate(**inputs, max_new_tokens=20)

NameError: name 'processor' is not defined
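
Answers

The last line of the traceback is the real problem: NameError: name 'processor' is not defined. Everything above it is just LangChain passing the error up from the custom tool's _run method, which calls processor (and then model and device) even though processor doesn't exist in the notebook's session, for example because the cell that defines it was never run, or a kernel restart wiped it. The cell below is a hedged sketch of a fix, assuming the captioning tool is meant to use a BLIP model from transformers; the model choice is an assumption, while the variable names processor, model and device come from the traceback.

# Hedged sketch: define the objects the tool's _run method expects before building the agent.
# "Salesforce/blip-image-captioning-base" is an assumed model choice, not from the original notebook.
import torch
from transformers import BlipProcessor, BlipForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"

# The processor turns the PIL image into tensors; the model generates the caption text
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
).to(device)

If these objects were defined earlier in the notebook, make sure that cell has been re-run since the last kernel restart; the tool only sees whatever exists in the session at the moment the agent calls it.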

Question 3 - Question - Name

Add more details here

## Code for Q3

Question 4 - Question - Name

Add more details here

## Code for Q4

Question 5 - Question - Name

Add more details here

Question 6 - Question - Name

Add more details here

## Code for Q6