"
- ],
- "text/plain": [
- ""
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- },
- {
- "data": {
- "text/html": [
- "The key notions of CLP are those of an algebra and an associated constraint solver over a class of constraints, namely a set of first order formulas including the always satisfiable constraint true, the un- satisfiable constraint false, and closed under variable renaming, conjunction and existential quantification."
- ],
- "text/plain": [
- ""
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- },
- {
- "data": {
- "text/html": [
- "
"
],
"text/plain": [
""
@@ -398,7 +350,7 @@
{
"data": {
"text/html": [
- "A distinguishing feature of CLP(Intervals) is that it decomposes equations, or other composite expressions, into primitive constraints. These primitive con- straints are the relational versions of the building blocks of expressions, which are admissible functions."
+ "A CLP(FD) system provides primitives for accessing and updating attribute values. A CLP(FD) system provides equality (=), disequality (6=), and inequality con- straints. In addition, a CLP(FD) system also provides some other constraints such as global constraints."
],
"text/plain": [
""
@@ -410,7 +362,7 @@
{
"data": {
"text/html": [
- "
"
],
"text/plain": [
""
@@ -422,7 +374,7 @@
{
"data": {
"text/html": [
- "A CLP(FD) system provides primitives for accessing and updating attribute values. A CLP(FD) system provides equality (=), disequality (6=), and inequality con- straints. In addition, a CLP(FD) system also provides some other constraints such as global constraints."
+ "CLP(FD) languages have been suc- cessfully used for solving a variety of industrial and academic problems. However, in some constraint problems, where domain elements need to be acquired, it may not be wise to perform the acquisition of the whole domains of variables before the beginning of the constraint propagation process."
],
"text/plain": [
""
@@ -434,7 +386,7 @@
{
"data": {
"text/html": [
- "
"
],
"text/plain": [
""
@@ -446,7 +398,7 @@
{
"data": {
"text/html": [
- "CLP(FD) languages have been suc- cessfully used for solving a variety of industrial and academic problems. However, in some constraint problems, where domain elements need to be acquired, it may not be wise to perform the acquisition of the whole domains of variables before the beginning of the constraint propagation process."
+ "A CLP(FD) program searches a solution for a set of variables which take values over finite domains and which must verify a set of constraints. The evolution of the domains can be viewed as a sequence of applications of reduction operators attached to the constraints."
],
"text/plain": [
""
@@ -458,7 +410,7 @@
{
"data": {
"text/html": [
- "
"
],
"text/plain": [
""
@@ -470,7 +422,7 @@
{
"data": {
"text/html": [
- "A CLP(FD) program searches a solution for a set of variables which take values over finite domains and which must verify a set of constraints. The evolution of the domains can be viewed as a sequence of applications of reduction operators attached to the constraints."
+ "A proof procedure for CLP is defined as an extension of standard resolution. A state is defined as a pair 〈← a, A || C〉 of a goal and a set of constraints. At each step of the computation, some literal a is selected from the current goal according to some selection function."
],
"text/plain": [
""
@@ -482,7 +434,7 @@
{
"data": {
"text/html": [
- "
"
],
"text/plain": [
""
@@ -494,7 +446,7 @@
{
"data": {
"text/html": [
- "A proof procedure for CLP is defined as an extension of standard resolution. A state is defined as a pair 〈← a, A || C〉 of a goal and a set of constraints. At each step of the computation, some literal a is selected from the current goal according to some selection function."
+ "CLP combines the advantages of two declarative paradigms: logic programming (Prolog) and constraint solving. In logic program- ming, problems are stated in a declarative way using rules to define relations (predi- cates). Problems are solved using chronological backtrack search to explore choices."
],
"text/plain": [
""
@@ -506,7 +458,7 @@
{
"data": {
"text/html": [
- "
"
],
"text/plain": [
""
@@ -518,7 +470,7 @@
{
"data": {
"text/html": [
- "To this end, we discuss the formal basics of Constraint Logic Programming (CLP), which is used here to provide an operational treatment of various declarative constraint-based grammars. This is done by an embedding of the logical description languages of such grammars into a CLP scheme, yielding Constraint Logic Grammars (CLGs)."
+ "Immunosorbent electron microscopy was used to quantify recombinant baculovirus-generated bluetongue virus (BTV) core-like particles (CLP) in either purified preparations or lysates of recombinant baculovirus-infected cells. The capture antibody was an anti-BTV VP7 monoclonal antibody."
],
"text/plain": [
""
@@ -530,7 +482,7 @@
{
"data": {
"text/html": [
- "
"
],
"text/plain": [
""
@@ -542,7 +494,7 @@
{
"data": {
"text/html": [
- "Immunosorbent electron microscopy was used to quantify recombinant baculovirus-generated bluetongue virus (BTV) core-like particles (CLP) in either purified preparations or lysates of recombinant baculovirus-infected cells. The capture antibody was an anti-BTV VP7 monoclonal antibody."
+ "We will define an amalgamated proof system that combines inference rules from intuitionistic sequent calculus with constraint entailment, in such a way that the key property of an abstract logic programming language is preserved."
],
"text/plain": [
""
@@ -554,7 +506,7 @@
{
"data": {
"text/html": [
- "
'))\n",
"\n",
"for index,search_results in agg_search_results.items():\n",
- " for result in search_results['@search.answers']:\n",
- " if result['score'] > 0.5: # Show answers that are at least 50% of the max possible score=1\n",
- " display(HTML('
'))\n",
- " display(HTML(result['text']))\n",
+ " if '@search_answers' in search_results:\n",
+ " for result in search_results['@search.answers']:\n",
+ " if result['score'] > 0.5: # Show answers that are at least 50% of the max possible score=1\n",
+ " display(HTML('
'))\n",
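The key-existence guard added in this hunk can be sketched as a standalone function; this is a minimal version assuming the Azure Cognitive Search response shape, where semantic answers live under the `@search.answers` key with `text` and `score` fields. The mock dict below is invented for illustration; real responses come from the search REST API.

```python
# Minimal sketch of the answer-filtering loop above, run against a
# mocked Azure Cognitive Search response. The dict only imitates the
# assumed response shape; it is not real API output.

def top_answers(search_results: dict, min_score: float = 0.5) -> list:
    """Return answer texts scoring above min_score, tolerating a missing key."""
    answers = search_results.get("@search.answers") or []
    return [a["text"] for a in answers if a.get("score", 0) > min_score]

mock = {
    "@search.answers": [
        {"text": "CLP stands for Constraint Logic Programming.", "score": 0.83},
        {"text": "Low-confidence snippet.", "score": 0.12},
    ]
}
print(top_answers(mock))
```

Using `.get(...) or []` keeps the loop safe when a result set carries no semantic answers at all, which is the failure mode the added `if` guard addresses.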
@@ -846,7 +799,7 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": 11,
"id": "eea62a7d-7e0e-4a93-a89c-20c96560c665",
"metadata": {},
"outputs": [],
@@ -897,7 +850,7 @@
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": 12,
"id": "13df9247-e784-4e04-9475-55e672efea47",
"metadata": {},
"outputs": [],
@@ -909,7 +862,7 @@
},
{
"cell_type": "code",
- "execution_count": 9,
+ "execution_count": 13,
"id": "7b0520b9-83b2-49fd-ad84-624cb0f15ce1",
"metadata": {},
"outputs": [
@@ -933,7 +886,7 @@
},
{
"cell_type": "code",
- "execution_count": 10,
+ "execution_count": 15,
"id": "dcc7dae3-6b88-4ea6-be43-b178ebc559dc",
"metadata": {},
"outputs": [
@@ -941,11 +894,11 @@
"data": {
"text/plain": [
"{'question': 'What is CLP?',\n",
- " 'language': 'French',\n",
- " 'text': \"CLP, ou Classification, Labelling and Packaging, est un système de classification, d'étiquetage et d'emballage des produits chimiques utilisé dans l'Union européenne. Il vise à informer les utilisateurs sur les dangers des produits chimiques et à promouvoir une utilisation sûre. Le CLP repose sur des critères de classification harmonisés au niveau international et utilise des pictogrammes, des mentions de danger et des conseils de prudence pour communiquer les informations de manière claire et compréhensible.\"}"
+ " 'language': 'Russian',\n",
+ " 'text': 'CLP (Classification, Labelling and Packaging) - это система классификации, маркировки и упаковки химических веществ и смесей в Европейском союзе.'}"
]
},
- "execution_count": 10,
+ "execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
@@ -953,7 +906,7 @@
"source": [
"# And finnaly we create our first generic chain\n",
"chain_chat = LLMChain(llm=llm, prompt=prompt)\n",
- "chain_chat({\"question\": QUESTION, \"language\": \"French\"})"
+ "chain_chat({\"question\": QUESTION, \"language\": \"Russian\"})"
]
},
{
@@ -1030,7 +983,7 @@
},
{
"cell_type": "code",
- "execution_count": 11,
+ "execution_count": 16,
"id": "12682a1b-df92-49ce-a638-7277103f6cb3",
"metadata": {},
"outputs": [],
@@ -1050,7 +1003,7 @@
},
{
"cell_type": "code",
- "execution_count": 12,
+ "execution_count": 17,
"id": "3bccca45-d1dd-476f-b109-a528b857b6b3",
"metadata": {},
"outputs": [
@@ -1069,14 +1022,23 @@
]
},
{
- "cell_type": "code",
- "execution_count": 13,
+ "cell_type": "markdown",
"id": "7714f38a-daaa-4fc5-a95a-dd025d153216",
"metadata": {},
+ "source": [
+ "# Uncomment the below line if you want to inspect the ordered results\n",
+ "#ordered_results"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "639b044e",
+ "metadata": {},
"outputs": [],
"source": [
"# Uncomment the below line if you want to inspect the ordered results\n",
- "# ordered_results"
+ "#ordered_results"
]
},
{
@@ -1089,7 +1051,7 @@
},
{
"cell_type": "code",
- "execution_count": 14,
+ "execution_count": 18,
"id": "2937ba3b-098d-43f8-8498-3534882a5cc7",
"metadata": {},
"outputs": [],
@@ -1099,7 +1061,7 @@
},
{
"cell_type": "code",
- "execution_count": 15,
+ "execution_count": 19,
"id": "f664df30-99c3-4a30-8cb0-42ba3044e5b0",
"metadata": {},
"outputs": [
@@ -1107,28 +1069,218 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "Vectorizing 7 chunks from Document: https://demodatasetsp.blob.core.windows.net/arxivcs/arxivcs/0508/0508108v1.pdf\n",
- "Vectorizing 5 chunks from Document: https://demodatasetsp.blob.core.windows.net/arxivcs/arxivcs/0701/0701082v1.pdf\n",
- "Vectorizing 8 chunks from Document: https://demodatasetsp.blob.core.windows.net/arxivcs/arxivcs/0508/0508106v1.pdf\n",
- "Vectorizing 8 chunks from Document: https://demodatasetsp.blob.core.windows.net/arxivcs/arxivcs/0011/0011030v1.pdf\n",
- "Vectorizing 7 chunks from Document: https://demodatasetsp.blob.core.windows.net/arxivcs/arxivcs/0106/0106008v1.pdf\n",
- "Vectorizing 14 chunks from Document: https://demodatasetsp.blob.core.windows.net/arxivcs/arxivcs/0506/0506005v1.pdf\n",
- "Vectorizing 13 chunks from Document: https://demodatasetsp.blob.core.windows.net/arxivcs/arxivcs/0408/0408056v1.pdf\n",
- "Vectorizing 11 chunks from Document: https://demodatasetsp.blob.core.windows.net/arxivcs/arxivcs/0310/0310042v1.pdf\n",
- "Vectorizing 8 chunks from Document: https://demodatasetsp.blob.core.windows.net/arxivcs/arxivcs/0003/0003026v1.pdf\n",
- "Vectorizing 66 chunks from Document: https://demodatasetsp.blob.core.windows.net/arxivcs/arxivcs/0008/0008036v1.pdf\n",
- "Vectorizing 1 chunks from Document: https://www.ncbi.nlm.nih.gov/pubmed/10403670/\n",
- "Vectorizing 1 chunks from Document: https://www.ncbi.nlm.nih.gov/pubmed/17047515/\n",
- "Vectorizing 1 chunks from Document: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7103146/\n",
- "Vectorizing 1 chunks from Document: https://api.elsevier.com/content/article/pii/S0033318220301420; https://www.sciencedirect.com/science/article/pii/S0033318220301420?v=s5\n",
- "Vectorizing 1 chunks from Document: https://www.ncbi.nlm.nih.gov/pubmed/16286234/\n",
- "Vectorizing 1 chunks from Document: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7131174/\n",
- "Vectorizing 1 chunks from Document: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5809586/\n",
- "Vectorizing 1 chunks from Document: https://www.ncbi.nlm.nih.gov/pubmed/17414124/\n",
- "Vectorizing 1 chunks from Document: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7131086/\n",
- "Vectorizing 1 chunks from Document: https://doi.org/10.1111/bjh.14134; https://www.ncbi.nlm.nih.gov/pubmed/27173746/\n",
- "CPU times: user 8.23 s, sys: 208 ms, total: 8.44 s\n",
- "Wall time: 47.8 s\n"
+ "Vectorizing 7 chunks from Document: https://datasetsgptsmartsearch.blob.core.windows.net/arxivcs/pdf/0508/0508108v1.pdf\n",
+ "Vectorizing 5 chunks from Document: https://datasetsgptsmartsearch.blob.core.windows.net/arxivcs/pdf/0701/0701082v1.pdf\n",
+ "Vectorizing 8 chunks from Document: https://datasetsgptsmartsearch.blob.core.windows.net/arxivcs/pdf/0508/0508106v1.pdf\n",
+ "Vectorizing 8 chunks from Document: https://datasetsgptsmartsearch.blob.core.windows.net/arxivcs/pdf/0011/0011030v1.pdf\n",
+ "Exception: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))\n",
+ "The ACLP representation is in the middle of the declarative FOL representation\n",
+ "and the more procedural CLP representation.\n",
+ "\n",
+ "\n",
+ "\n",
+ "3 Experiments\n",
+ "\n",
+ "3.1 The Systems\n",
+ "\n",
+ "The finite domain CLP package is the one provided with ECLiPSe version 4.2.\n",
+ "Both abductive systems, ACLP [10] and SLDNFAC, [3] are meta interpreters\n",
+ "\n",
+ "written in Prolog, running on ECLiPSe version 4.2 and making use of its fi-\n",
+ "nite domain library. For all these systems, a search strategy which first selects\n",
+ "variables with the smallest domain which participate in the largest number of\n",
+ "constraints was used.\n",
+ "\n",
+ "The model generator SEM version 1.7 is a fine tuned package written in C.\n",
+ "smodels version 2.25, the system for computing stable models, is implemented in\n",
+ "C++ and the associated program used for grounding is lparse version 0.99.54.\n",
+ "All experiments have been done on the same hardware, namely Pentium II.\n",
+ "\n",
+ "3.2 Graph Coloring\n",
+ "\n",
+ "0.01\n",
+ "\n",
+ "0.1\n",
+ "\n",
+ "1\n",
+ "\n",
+ "10\n",
+ "\n",
+ "100\n",
+ "\n",
+ "1000\n",
+ "\n",
+ "10000\n",
+ "\n",
+ "10 100 1000\n",
+ "\n",
+ "T\n",
+ "im\n",
+ "\n",
+ "e \n",
+ "(s\n",
+ "\n",
+ "ec\n",
+ ".)\n",
+ "\n",
+ "Nodes\n",
+ "\n",
+ "Graph Coloring\n",
+ "\n",
+ "ACLP\n",
+ "SLDNFAC\n",
+ "\n",
+ "SEM\n",
+ "CLP\n",
+ "\n",
+ "smodels\n",
+ "\n",
+ "Fig. 1. Graph coloring\n",
+ "\n",
+ "Our first experiment is done with 4-colorable graphs. We used a graph gen-\n",
+ "erator1 program which is available from address http://web.cs.ualberta.ca/\n",
+ "1 The graphs have been generated with the following parameters: 0, 13, 6, n, 4, 0.2, 1,\n",
+ "\n",
+ "0 where n is the number of vertices. Graph-coloring problems generated with these\n",
+ "parameters are difficult.\n",
+ "\n",
+ "\n",
+ "\n",
+ "~joe/Coloring/Generators/generate.html. We applied the systems in a se-\n",
+ "quence of experiments with graphs of increasing size and constant number of\n",
+ "colors. We have modified only one parameter of the problem namely the number\n",
+ "of vertices. Figure 1 gives the results of solving the problem with the different\n",
+ "systems. Both axes are plotted in a logarithmic scale. On the x-axis we have put\n",
+ "the number of vertices. Not surprisingly, CLP is the fastest system. The times\n",
+ "for smodels is second best on this problem. We assume it is in part because of\n",
+ "the very concise formulation. Using the so called technique of rules with excep-\n",
+ "tions [14], the two rules needed to describe the space of candidate solutions also\n",
+ "encode the constraint that the color is a function of the vertex. Hence there is\n",
+ "only one other rule, namely the constraint that two adjacent vertices must have\n",
+ "a different color. The difference with CLP is almost two orders of magnitude\n",
+ "for the largest problems. The times reported for smodels do not include the\n",
+ "time for grounding the problem, these times only consist of a small part of the\n",
+ "total time. Grounding the problem for 650 nodes takes only 10 seconds, whereas\n",
+ "solving the problem takes over 100 seconds. SLDNFAC is slightly worse than\n",
+ "smodels. Although meta-interpretation overhead tends to increase with prob-\n",
+ "lems size, the difference with smodels grows very slowly. The model generator\n",
+ "SEM deteriorates much faster and runs out of memory for the larger problems.\n",
+ "The fact that it grounds the whole theory is a likely explanation. The differ-\n",
+ "ence with smodels supports the claim that smodels has better techniques for\n",
+ "grounding. ACLP performs substantially worse than SLDNFAC and also dete-\n",
+ "riorates faster. The difference is likely due to the function-specification available\n",
+ "in SLDNFAC. Contrary to ACLP, SLDNFAC exploits the knowledge that the\n",
+ "abducible encodes a function to reduce the number of explicitly stored integrity\n",
+ "constraints.\n",
+ "\n",
+ "3.3 N-Queens\n",
+ "\n",
+ "Figure 2 gives the running times for the different systems for finding a first\n",
+ "solution. Both axes are plotted on a linear scale. The time consumed while\n",
+ "grounding is again not included in the graph (for 18 queens, half a second).\n",
+ "Again, CLP gives the best results. SLDNFAC is second best and, although meta-\n",
+ "interpretation overhead increases with problem size, deteriorates very slowly.\n",
+ "ACLP is third2, with a small difference, probably due to the lack of the function-\n",
+ "specification mentioned in the section above. The next one is SEM. It runs out\n",
+ "of memory for large problems (it needs about 120MB for 27 queens). smodels\n",
+ "\n",
+ "performs very poorly on this problem, in particular when compared with its\n",
+ "performance on the graph coloring problem. It is well-known that to obtain good\n",
+ "results for computing the first solution for the n-queens problem, a good search\n",
+ "heuristic is needed, like the first fail principle used by the systems based on CLP.\n",
+ "We believe that the bad performance of smodels is explained by the absence of\n",
+ "\n",
+ "2 The results with ACLP are substantially better than those in the previous paper [17].\n",
+ "This is due to the removal of a redundant and time consuming complete consistency\n",
+ "check after the processing of each new CLP constraint.\n",
+ "\n",
+ "\n",
+ "\n",
+ "0\n",
+ "\n",
+ "2\n",
+ "\n",
+ "4\n",
+ "\n",
+ "6\n",
+ "\n",
+ "8\n",
+ "\n",
+ "10\n",
+ "\n",
+ "10 15 20 25 30\n",
+ "\n",
+ "T\n",
+ "im\n",
+ "\n",
+ "e \n",
+ "(s\n",
+ "\n",
+ "ec\n",
+ ".)\n",
+ "\n",
+ "Queens\n",
+ "\n",
+ "N-Queens\n",
+ "\n",
+ "ACLP\n",
+ "SLDNFAC\n",
+ "\n",
+ "SEM\n",
+ "CLP\n",
+ "\n",
+ "smodels\n",
+ "\n",
+ "Fig. 2. N-queens: one solution\n",
+ "\n",
+ "0.01\n",
+ "\n",
+ "0.1\n",
+ "\n",
+ "1\n",
+ "\n",
+ "10\n",
+ "\n",
+ "100\n",
+ "\n",
+ "1000\n",
+ "\n",
+ "4 5 6 7 8 9 10 11 12 13\n",
+ "\n",
+ "T\n",
+ "im\n",
+ "\n",
+ "e \n",
+ "(s\n",
+ "\n",
+ "ec\n",
+ ".)\n",
+ "\n",
+ "Queens\n",
+ "\n",
+ "N-Queens (all solutions)\n",
+ "\n",
+ "ACLP\n",
+ "SLDNFAC\n",
+ "\n",
+ "SEM\n",
+ "CLP\n",
+ "\n",
+ "smodels\n",
+ "\n",
+ "Fig. 3. N-queens: all solutions\n",
+ "\n",
+ "\n",
+ "\n",
+ "appropriate heuristics. \n",
+ "Vectorizing 14 chunks from Document: https://datasetsgptsmartsearch.blob.core.windows.net/arxivcs/pdf/0506/0506005v1.pdf\n",
+ "Vectorizing 13 chunks from Document: https://datasetsgptsmartsearch.blob.core.windows.net/arxivcs/pdf/0408/0408056v1.pdf\n",
+ "Vectorizing 11 chunks from Document: https://datasetsgptsmartsearch.blob.core.windows.net/arxivcs/pdf/0310/0310042v1.pdf\n",
+ "Vectorizing 8 chunks from Document: https://datasetsgptsmartsearch.blob.core.windows.net/arxivcs/pdf/0003/0003026v1.pdf\n",
+ "Vectorizing 8 chunks from Document: https://datasetsgptsmartsearch.blob.core.windows.net/arxivcs/pdf/0402/0402019v1.pdf\n",
+ "Vectorizing 19 chunks from Document: https://datasetsgptsmartsearch.blob.core.windows.net/arxivcs/pdf/0404/0404053v1.pdf\n",
+ "CPU times: user 8.82 s, sys: 734 ms, total: 9.56 s\n",
+ "Wall time: 3h 52s\n"
]
}
],
@@ -1205,7 +1357,7 @@
},
{
"cell_type": "code",
- "execution_count": 16,
+ "execution_count": 26,
"id": "61098bb4-33da-4eb4-94cf-503587337aca",
"metadata": {},
"outputs": [
@@ -1242,7 +1394,7 @@
},
{
"cell_type": "code",
- "execution_count": 17,
+ "execution_count": 27,
"id": "7dfb9e39-2542-469d-8f64-4c0c26d79535",
"metadata": {},
"outputs": [
@@ -1265,7 +1417,7 @@
},
{
"cell_type": "code",
- "execution_count": 18,
+ "execution_count": 28,
"id": "880885fe-16bd-44bb-9556-7cb3d4989993",
"metadata": {},
"outputs": [
@@ -1275,9 +1427,9 @@
"text": [
"System prompt token count: 1669\n",
"Max Completion Token count: 1000\n",
- "Combined docs (context) token count: 2423\n",
+ "Combined docs (context) token count: 1206\n",
"--------\n",
- "Requested token count: 5092\n",
+ "Requested token count: 3875\n",
"Token limit for gpt-35-turbo : 4096\n",
"Chain Type selected: map_reduce\n"
]
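The printout above suggests a simple budget check behind the chain-type choice: compare system prompt, retrieved context, and completion tokens against the model's window. The sketch below is one plausible version of that rule, not necessarily the notebook's exact heuristic (which evidently still selects map_reduce slightly under the limit here); the helper name and plain-integer token counts are illustrative, standing in for tokenizer-measured lengths (e.g. via tiktoken).

```python
# Hypothetical sketch: pick a chain type by comparing the requested
# token budget against the model's context window. The integers in the
# example call mirror the counts printed in the cell output above.

def choose_chain_type(prompt_tokens, context_tokens, completion_tokens,
                      token_limit=4096):
    requested = prompt_tokens + context_tokens + completion_tokens
    # "stuff" passes all context in one prompt; "map_reduce" summarizes
    # chunks separately when the combined request would not fit.
    return "stuff" if requested <= token_limit else "map_reduce"

print(choose_chain_type(1669, 2423, 1000))  # 5092 tokens: over the 4096 limit
```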
@@ -1316,7 +1468,7 @@
},
{
"cell_type": "code",
- "execution_count": 19,
+ "execution_count": 29,
"id": "511273b3-256d-4e60-be72-ccd4a74cb885",
"metadata": {},
"outputs": [],
@@ -1333,7 +1485,7 @@
},
{
"cell_type": "code",
- "execution_count": 20,
+ "execution_count": 32,
"id": "b99a0c19-d48c-41e9-8d6c-6d9f13d29da3",
"metadata": {},
"outputs": [
@@ -1341,8 +1493,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "CPU times: user 49.6 ms, sys: 8.8 ms, total: 58.4 ms\n",
- "Wall time: 3.69 s\n"
+ "CPU times: user 75.9 ms, sys: 12.7 ms, total: 88.6 ms\n",
+ "Wall time: 3min 4s\n"
]
}
],
@@ -1354,14 +1506,14 @@
},
{
"cell_type": "code",
- "execution_count": 21,
+ "execution_count": 34,
"id": "37f7fa67-f67b-402e-89e3-266d5d6d21d8",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "CLP can refer to different things depending on the context. In the context of Consultation-Liaison Psychiatry, CLP stands for Consultation-Liaison Psychiatry[1]. In the context of Constraint Logic Programming, CLP stands for Constraint Logic Programming[2]."
+ "CLP can stand for different things depending on the context. It can refer to Consultation-Liaison Psychiatry (CLP)[1], double-layered core-like particles containing rotavirus VP2 and VP6[2], or cecal ligation and puncture (CLP)[3]."
],
"text/plain": [
""
@@ -1385,16 +1537,53 @@
},
{
"cell_type": "code",
- "execution_count": 23,
+ "execution_count": 35,
"id": "11345374-6420-4b36-b061-795d2a804c85",
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "Chunk Summary: Consultation-Liaison Psychiatry (CLP)"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "Chunk Summary: CLP stands for double-layered core-like particles containing rotavirus VP2 and VP6."
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "Chunk Summary: C23-treated mice (8 mg/kg BW) at 2 h after cecal ligation and puncture (CLP) had lower serum levels of LDH, ALT, IL-6, TNF-α, and IL-1β (reduced by ≥39%) at 20 h after CLP compared with mice treated with vehicle."
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
"source": [
- "# Uncomment if you want to inspect the results from map_reduce chain type, each top similar chunk summary (k=4 by default)\n",
+ "#Uncomment if you want to inspect the results from map_reduce chain type, each top similar chunk summary (k=4 by default)\n",
"\n",
- "# if chain_type == \"map_reduce\":\n",
- "# for step in response['intermediate_steps']:\n",
- "# display(HTML(\"Chunk Summary: \" + step))"
+ "if chain_type == \"map_reduce\":\n",
+ " for step in response['intermediate_steps']:\n",
+ " display(HTML(\"Chunk Summary: \" + step))"
]
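The inspection loop uncommented above only fires for the map_reduce chain type, where each top-k chunk gets its own intermediate summary. A self-contained sketch with a mocked response dict (the real `response` is produced by the LangChain QA chain, and `display(HTML(...))` is replaced by plain prints; the summary strings are invented for illustration):

```python
# Sketch of inspecting map_reduce intermediate steps against a mocked
# response dict; the real dict comes from the QA chain call above.

chain_type = "map_reduce"
response = {
    "intermediate_steps": [
        "Consultation-Liaison Psychiatry (CLP)",
        "CLP: double-layered core-like particles containing rotavirus VP2 and VP6",
    ]
}

summaries = []
if chain_type == "map_reduce":
    # One entry per retrieved chunk (k=4 by default in the notebook).
    for step in response["intermediate_steps"]:
        summaries.append("Chunk Summary: " + step)

for line in summaries:
    print(line)
```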
},
{
@@ -1422,9 +1611,9 @@
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3.10 - SDK v2",
+ "display_name": ".venv",
"language": "python",
- "name": "python310-sdkv2"
+ "name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -1436,7 +1625,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.11"
+ "version": "3.11.5"
}
},
"nbformat": 4,
diff --git a/04-Complex-Docs.ipynb b/04-Complex-Docs.ipynb
index f7b52297..0075282b 100644
--- a/04-Complex-Docs.ipynb
+++ b/04-Complex-Docs.ipynb
@@ -24,7 +24,7 @@
},
{
"cell_type": "code",
- "execution_count": 22,
+ "execution_count": 2,
"id": "15f6044e-463f-4988-bc46-a3c3d641c15c",
"metadata": {},
"outputs": [],
@@ -76,7 +76,7 @@
},
{
"cell_type": "code",
- "execution_count": 2,
+ "execution_count": 3,
"id": "331692ba-b68e-4b99-9bae-5057da9a389d",
"metadata": {},
"outputs": [],
@@ -87,7 +87,7 @@
},
{
"cell_type": "code",
- "execution_count": 3,
+ "execution_count": 4,
"id": "594ff0d4-56e3-4bed-843d-28c7a092069b",
"metadata": {},
"outputs": [],
@@ -115,7 +115,7 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": 5,
"id": "0999e24b-6a75-4fa1-9a5f-426cf0f0bdba",
"metadata": {},
"outputs": [],
@@ -137,7 +137,7 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": 6,
"id": "3554f0b7-fee8-4446-a155-5d22dc0f0888",
"metadata": {},
"outputs": [
@@ -145,7 +145,14 @@
"name": "stderr",
"output_type": "stream",
"text": [
- "100%|██████████| 5/5 [00:02<00:00, 2.02it/s]\n"
+ " 0%| | 0/5 [00:00, ?it/s]"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "100%|██████████| 5/5 [00:20<00:00, 4.06s/it]\n"
]
}
],
@@ -174,7 +181,7 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": 7,
"id": "c1c63a2f-7a53-4346-8a1f-483cfd159d34",
"metadata": {},
"outputs": [
@@ -184,27 +191,27 @@
"text": [
"Extracting Text from Azure_Cognitive_Search_Documentation.pdf ...\n",
"Extracting text using PyPDF\n",
- "Parsing took: 32.466733 seconds\n",
+ "Parsing took: 16.425543 seconds\n",
"Azure_Cognitive_Search_Documentation.pdf contained 1947 pages\n",
"\n",
"Extracting Text from Boundaries_When_to_Say_Yes_How_to_Say_No_to_Take_Control_of_Your_Life.pdf ...\n",
"Extracting text using PyPDF\n",
- "Parsing took: 1.773100 seconds\n",
+ "Parsing took: 0.848503 seconds\n",
"Boundaries_When_to_Say_Yes_How_to_Say_No_to_Take_Control_of_Your_Life.pdf contained 357 pages\n",
"\n",
"Extracting Text from Fundamentals_of_Physics_Textbook.pdf ...\n",
"Extracting text using PyPDF\n",
- "Parsing took: 96.595367 seconds\n",
+ "Parsing took: 47.607230 seconds\n",
"Fundamentals_of_Physics_Textbook.pdf contained 1450 pages\n",
"\n",
"Extracting Text from Made_To_Stick.pdf ...\n",
"Extracting text using PyPDF\n",
- "Parsing took: 7.461382 seconds\n",
+ "Parsing took: 3.510144 seconds\n",
"Made_To_Stick.pdf contained 225 pages\n",
"\n",
"Extracting Text from Pere_Riche_Pere_Pauvre.pdf ...\n",
"Extracting text using PyPDF\n",
- "Parsing took: 0.995148 seconds\n",
+ "Parsing took: 0.402579 seconds\n",
"Pere_Riche_Pere_Pauvre.pdf contained 225 pages\n",
"\n"
]
@@ -241,7 +248,7 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": 8,
"id": "f2a5d62f-b664-4662-a6c9-a1eb2a3c5e11",
"metadata": {},
"outputs": [
@@ -253,22 +260,24 @@
" chunk text: 3. In Description, choose a field that provides details that might help someone ...\n",
"\n",
"Boundaries_When_to_Say_Yes_How_to_Say_No_to_Take_Control_of_Your_Life.pdf \n",
- " chunk text: 14\n",
- "11:59 A.M.\n",
- "The rest of Sherrie’s morning proceeded fairly well. A tal-\n",
- "ented ...\n",
+ " chunk text: 36\n",
+ "Consequences\n",
+ "Trespassing on other people’s property carries consequences.\n",
+ "“No ...\n",
"\n",
"Fundamentals_of_Physics_Textbook.pdf \n",
- " chunk text: 192-2INSTANTANEOUS VELOCITY AND SPEED\n",
- "Figure 2-6(a) The x(t) curve for an elevat ...\n",
+ " chunk text: CHAPTER 2Motion Along a Straight Line2-1POSITION, DISPLACEMENT, AND AVERAGE VELO ...\n",
"\n",
"Made_To_Stick.pdf \n",
- " chunk text: the artistic feel of the movie will change. When st ars are hired to play \n",
- "the p ...\n",
+ " chunk text: or the communication line might be lost completely— a common oc- \n",
+ "currence durin ...\n",
"\n",
"Pere_Riche_Pere_Pauvre.pdf \n",
- " chunk text: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n",
- "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ...\n",
+ " chunk text: ~~~~~~~~~~~~~\n",
+ "~~~~~~~~~~~~~~\n",
+ "~~~~~~~~~~~~~~~~~~~~~~~~~\n",
+ "~~~~~~~~~~~~~~~~~\n",
+ "~~~~~~~ ...\n",
"\n"
]
}
@@ -289,7 +298,7 @@
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": 9,
"id": "801c6bc2-467c-4418-aa7e-ef89a1e20e1c",
"metadata": {},
"outputs": [
@@ -298,8 +307,8 @@
"output_type": "stream",
"text": [
"Extracting text using Azure Document Intelligence\n",
- "CPU times: user 10.6 s, sys: 160 ms, total: 10.8 s\n",
- "Wall time: 42.4 s\n"
+ "CPU times: user 5.55 s, sys: 144 ms, total: 5.69 s\n",
+ "Wall time: 1min 6s\n"
]
}
],
@@ -324,7 +333,7 @@
},
{
"cell_type": "code",
- "execution_count": 10,
+ "execution_count": 11,
"id": "97f9c5bb-c44b-4a4d-9780-591f9f8d128a",
"metadata": {},
"outputs": [
@@ -333,7 +342,7 @@
"output_type": "stream",
"text": [
"Pere_Riche_Pere_Pauvre.pdf \n",
- " chunk text: règles différentes. Les employés sont perdants; les patrons et les investisseurs ...\n",
+ " chunk text: première entreprise, maintenant défunte. Tout en nettoyant, nous décidâmes quand ...\n",
"\n"
]
}
@@ -363,7 +372,7 @@
},
{
"cell_type": "code",
- "execution_count": 11,
+ "execution_count": 12,
"id": "7d46e7c5-49c4-40f3-bb2d-79a9afeab4b1",
"metadata": {},
"outputs": [],
@@ -373,7 +382,7 @@
},
{
"cell_type": "code",
- "execution_count": 12,
+ "execution_count": 13,
"id": "1b07e84b-d306-4bc9-9124-e64f252dd7b2",
"metadata": {},
"outputs": [],
@@ -386,7 +395,7 @@
},
{
"cell_type": "code",
- "execution_count": 13,
+ "execution_count": 14,
"id": "2df4db6b-969b-4b91-963f-9334e17a4e3c",
"metadata": {},
"outputs": [
@@ -475,82 +484,10 @@
},
{
"cell_type": "code",
- "execution_count": 13,
+ "execution_count": null,
"id": "f5c8aa55-1b60-4057-93db-0d4a89993a57",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Uploading chunks from Azure_Cognitive_Search_Documentation.pdf\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "100%|██████████| 357/357 [01:36<00:00, 3.71it/s]s]\n"
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Uploading chunks from Fundamentals_of_Physics_Textbook.pdf\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "100%|██████████| 1450/1450 [07:10<00:00, 3.37it/s]\n"
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Uploading chunks from Made_To_Stick.pdf\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "100%|██████████| 225/225 [01:03<00:00, 3.56it/s]\n"
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Uploading chunks from Pere_Riche_Pere_Pauvre.pdf\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "100%|██████████| 225/225 [01:02<00:00, 3.61it/s]"
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "CPU times: user 3min 6s, sys: 4.32 s, total: 3min 10s\n",
- "Wall time: 19min 29s\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"%%time\n",
"for bookname,bookmap in book_pages_map.items():\n",
@@ -596,7 +533,7 @@
},
{
"cell_type": "code",
- "execution_count": 14,
+ "execution_count": 16,
"id": "8b408798-5527-44ca-9dba-cad2ee726aca",
"metadata": {},
"outputs": [],
@@ -611,7 +548,7 @@
},
{
"cell_type": "code",
- "execution_count": 15,
+ "execution_count": 17,
"id": "1b182ade-0ddd-47a1-b1eb-2cbf435c317f",
"metadata": {},
"outputs": [],
@@ -637,7 +574,17 @@
},
{
"cell_type": "code",
- "execution_count": 16,
+ "execution_count": null,
+ "id": "c12303dc",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ordered_results"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
"id": "410ff796-dab1-4817-a3a5-82eeff6c0c57",
"metadata": {},
"outputs": [],
@@ -648,7 +595,7 @@
},
{
"cell_type": "code",
- "execution_count": 19,
+ "execution_count": 20,
"id": "744aba20-b3fd-4286-8d58-2ddfccc77734",
"metadata": {},
"outputs": [
@@ -671,7 +618,7 @@
},
{
"cell_type": "code",
- "execution_count": 20,
+ "execution_count": 22,
"id": "db1c4d56-8c2d-47d6-8717-810f156f1c0c",
"metadata": {},
"outputs": [
@@ -714,10 +661,21 @@
},
{
"cell_type": "code",
- "execution_count": 23,
+ "execution_count": 26,
"id": "62cf3a3f-2b4d-4806-8b92-eb982c52b0cd",
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "'stuff'"
+ ]
+ },
+ "execution_count": 26,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
"source": [
"if chain_type == \"stuff\":\n",
" chain = load_qa_with_sources_chain(llm, chain_type=chain_type, \n",
@@ -726,12 +684,13 @@
" chain = load_qa_with_sources_chain(llm, chain_type=chain_type, \n",
" question_prompt=COMBINE_QUESTION_PROMPT,\n",
" combine_prompt=COMBINE_PROMPT,\n",
- " return_intermediate_steps=True)"
+ " return_intermediate_steps=True)\n",
+ "chain_type"
]
},
{
"cell_type": "code",
- "execution_count": 24,
+ "execution_count": 29,
"id": "3b412c56-650f-4ca4-a868-9954f83679fa",
"metadata": {},
"outputs": [
@@ -739,8 +698,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "CPU times: user 21.9 ms, sys: 0 ns, total: 21.9 ms\n",
- "Wall time: 7.75 s\n"
+ "CPU times: user 11.4 ms, sys: 4.85 ms, total: 16.2 ms\n",
+ "Wall time: 4.78 s\n"
]
}
],
@@ -752,56 +711,52 @@
},
{
"cell_type": "code",
- "execution_count": 25,
+ "execution_count": 30,
"id": "63f07b08-87bd-4518-b2f2-03ee1096f59f",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "To push documents with vectors to an index using the Python SDK, you can use the following code example:\n",
+ "To push documents with vectors to an index using the Python SDK, you can follow these steps:\n",
+ "\n",
+ "1. Install the necessary libraries by running the following command:\n",
+ "```\n",
+ "%pip install azure-search-documents --pre\n",
+ "```\n",
"\n",
+ "2. Import the required modules and set up your clients:\n",
"```python\n",
+ "import os\n",
"from azure.core.credentials import AzureKeyCredential\n",
"from azure.search.documents import SearchClient\n",
"\n",
- "# Set up the connection\n",
- "endpoint = \"your_search_service_endpoint\"\n",
"index_name = \"your_index_name\"\n",
- "api_key = \"your_api_key\"\n",
- "\n",
- "# Create the search client\n",
- "search_client = SearchClient(endpoint=endpoint, index_name=index_name, credential=AzureKeyCredential(api_key))\n",
- "\n",
- "# Define your documents with vectors\n",
- "documents = [\n",
- " {\n",
- " \"@search.action\": \"upload\",\n",
- " \"id\": \"1\",\n",
- " \"title\": \"Document 1\",\n",
- " \"vector\": [0.1, 0.2, 0.3]\n",
- " },\n",
- " {\n",
- " \"@search.action\": \"upload\",\n",
- " \"id\": \"2\",\n",
- " \"title\": \"Document 2\",\n",
- " \"vector\": [0.4, 0.5, 0.6]\n",
- " },\n",
- " # Add more documents as needed\n",
- "]\n",
+ "endpoint = os.environ[\"SEARCH_ENDPOINT\"]\n",
+ "key = os.environ[\"SEARCH_API_KEY\"]\n",
"\n",
- "# Upload the documents to the index\n",
- "result = search_client.upload_documents(documents=documents)\n",
+ "search_client = SearchClient(endpoint, index_name, AzureKeyCredential(key))\n",
+ "```\n",
"\n",
- "# Check if the upload succeeded\n",
- "print(\"Upload of new documents succeeded:\", result[0].succeeded)\n",
+ "3. Create a document to be uploaded to the index:\n",
+ "```python\n",
+ "document = {\n",
+ " \"@search.action\": \"upload\",\n",
+ " \"id\": \"1\",\n",
+ " \"name\": \"Document 1\",\n",
+ " \"vector\": [0.1, 0.2, 0.3]\n",
+ "}\n",
"```\n",
"\n",
- "This code creates a search client using the endpoint, index name, and API key. It then defines the documents to be uploaded, including a unique ID, a title, and a vector. The `@search.action` field specifies that the action is an upload. Finally, the `upload_documents` method is called to push the documents to the index.\n",
+ "4. Upload the document to the index:\n",
+ "```python\n",
+ "result = search_client.upload_documents(documents=[document])\n",
+ "print(\"Upload of new document succeeded: {}\".format(result[0].succeeded))\n",
+ "```\n",
"\n",
- "Please note that you need to replace `\"your_search_service_endpoint\"`, `\"your_index_name\"`, and `\"your_api_key\"` with your own values.\n",
+ "Make sure to replace \"your_index_name\" with the actual name of your index. Also, ensure that you have the necessary credentials and endpoint configured.\n",
"\n",
- "[1]Source"
+ "[1]Source"
],
"text/plain": [
""
@@ -853,9 +808,9 @@
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3.10 - SDK v2",
+ "display_name": ".venv",
"language": "python",
- "name": "python310-sdkv2"
+ "name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -867,7 +822,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.11"
+ "version": "3.11.5"
}
},
"nbformat": 4,
diff --git a/05-Adding_Memory.ipynb b/05-Adding_Memory.ipynb
index 6377959c..d41a0aad 100644
--- a/05-Adding_Memory.ipynb
+++ b/05-Adding_Memory.ipynb
@@ -24,7 +24,7 @@
},
{
"cell_type": "code",
- "execution_count": 2,
+ "execution_count": 14,
"id": "733c782e-204c-47d0-8dae-c9df7091ab23",
"metadata": {},
"outputs": [],
@@ -69,7 +69,7 @@
},
{
"cell_type": "code",
- "execution_count": 3,
+ "execution_count": 15,
"id": "6bc63c55-a57d-49a7-b6c7-0f18bca8199e",
"metadata": {},
"outputs": [],
@@ -89,7 +89,7 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": 16,
"id": "3eef5dc9-8b80-4085-980c-865fa41fa1f6",
"metadata": {},
"outputs": [],
@@ -100,7 +100,7 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": 17,
"id": "a00181d5-bd76-4ce4-a256-75ac5b58c60f",
"metadata": {},
"outputs": [],
@@ -114,7 +114,7 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": 18,
"id": "9502d0f1-fddf-40d1-95d2-a1461dcc498a",
"metadata": {},
"outputs": [],
@@ -130,32 +130,28 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": 19,
"id": "c5c9903e-15c7-4e05-87a1-58e5a7917ba2",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "1. Autonomous Vehicles: Reinforcement learning can be used to train self-driving cars to make decisions on the road, such as navigating traffic, following traffic rules, and avoiding accidents.\n",
+ "1. Autonomous vehicles: Reinforcement learning can be used to train self-driving cars to make decisions and navigate safely in real-world traffic environments.\n",
"\n",
- "2. Robotics: Reinforcement learning can be applied to train robots to perform complex tasks, such as grasping objects, manipulating tools, or even playing sports.\n",
+ "2. Robotics: Reinforcement learning can be used to train robots to perform complex tasks such as grasping and manipulation in unstructured environments.\n",
"\n",
- "3. Game Playing: Reinforcement learning has been successfully used to train agents to play various games, including board games like chess and Go, as well as video games like Dota 2 and StarCraft II.\n",
+ "3. Game playing: Reinforcement learning has been used to train agents to play and master complex games such as chess, Go, and video games.\n",
"\n",
- "4. Finance: Reinforcement learning can be used in the field of finance to optimize trading strategies, portfolio management, and risk management, by learning from historical data and making decisions based on market conditions.\n",
+ "4. Personalized recommendation systems: Reinforcement learning can be used to optimize and personalize recommendations for users based on their preferences and behavior.\n",
"\n",
- "5. Healthcare: In healthcare, reinforcement learning can be used to optimize treatment plans, such as personalized medication dosage, disease diagnosis, and patient monitoring.\n",
+ "5. Healthcare: Reinforcement learning can be used to optimize treatment plans and drug dosages for patients based on their individual responses and medical history.\n",
"\n",
- "6. Energy Management: Reinforcement learning can be employed to optimize energy consumption and management in smart grids, by learning and adapting to changing energy demands and supply conditions.\n",
+ "6. Energy management: Reinforcement learning can be used to optimize energy usage in smart grids and buildings, leading to more efficient and sustainable energy consumption.\n",
"\n",
- "7. Advertising: Reinforcement learning can be used in online advertising to optimize the selection and display of ads to users, by learning from user feedback and maximizing the click-through rates or conversions.\n",
+ "7. Finance: Reinforcement learning can be used to optimize trading strategies and portfolio management in financial markets.\n",
"\n",
- "8. Recommendation Systems: Reinforcement learning can be applied to build personalized recommendation systems, by learning user preferences and optimizing the selection of recommended items or content.\n",
- "\n",
- "9. Industrial Process Optimization: Reinforcement learning can be used to optimize complex industrial processes, such as chemical production, supply chain management, and resource allocation, by learning from data and maximizing efficiency.\n",
- "\n",
- "10. Natural Language Processing: Reinforcement learning can be utilized to train conversational agents or chatbots to interact with users and improve their responses over time, by learning from user feedback and maximizing user satisfaction."
+ "8. Industrial automation: Reinforcement learning can be used to optimize and automate processes in manufacturing and logistics, leading to improved efficiency and cost savings."
],
"text/plain": [
""
@@ -173,17 +169,17 @@
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": 20,
"id": "99acaf3c-ce68-4b87-b24a-6065b15ff9a8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
- "\"I'm sorry, but as an AI language model, I don't have the capability to recall previous conversations. However, if you provide some context or specific details about our conversation, I can try to help you summarize the main points.\""
+ "'1. We discussed the current project status and the challenges we are facing.\\n2. We talked about potential solutions and next steps to address the challenges.\\n3. We reviewed the timeline and deadlines for the project.\\n4. We discussed the roles and responsibilities of team members.\\n5. We agreed on the need for better communication and collaboration among team members.\\n6. We talked about the resources and support needed to successfully complete the project.'"
]
},
- "execution_count": 8,
+ "execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
@@ -206,7 +202,7 @@
},
{
"cell_type": "code",
- "execution_count": 9,
+ "execution_count": 21,
"id": "0946ce71-6285-432e-b011-9c2dc1ba7b8a",
"metadata": {},
"outputs": [],
@@ -224,7 +220,7 @@
},
{
"cell_type": "code",
- "execution_count": 10,
+ "execution_count": 22,
"id": "6d088e51-e5eb-4143-b87d-b2be429eb864",
"metadata": {},
"outputs": [],
@@ -237,21 +233,17 @@
},
{
"cell_type": "code",
- "execution_count": 11,
+ "execution_count": 23,
"id": "d99e34ad-5539-44dd-b080-3ad05efd2f01",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "- Reinforcement learning can be used in various fields such as autonomous vehicles, robotics, game playing, finance, healthcare, energy management, advertising, recommendation systems, industrial process optimization, and natural language processing.\n",
- "- It can be applied to train self-driving cars, robots, and game-playing agents.\n",
- "- In finance, it can optimize trading strategies, portfolio management, and risk management.\n",
- "- In healthcare, it can optimize treatment plans and patient monitoring.\n",
- "- It can optimize energy consumption and management in smart grids.\n",
- "- It can optimize online advertising and personalized recommendation systems.\n",
- "- It can optimize industrial processes and resource allocation.\n",
- "- It can train conversational agents to interact with users and improve responses over time."
+ "1. Reinforcement learning can be used in autonomous vehicles, robotics, game playing, personalized recommendation systems, healthcare, energy management, finance, and industrial automation.\n",
+ "2. It can train self-driving cars to navigate safely, robots to perform complex tasks, and agents to master complex games.\n",
+ "3. It can optimize personalized recommendations, treatment plans, energy usage, trading strategies, and industrial processes.\n",
+ "4. It leads to improved efficiency, cost savings, and sustainable energy consumption."
],
"text/plain": [
""
@@ -283,7 +275,7 @@
},
{
"cell_type": "code",
- "execution_count": 29,
+ "execution_count": 24,
"id": "ba257e86-fd90-4a51-a72d-27000913e8c2",
"metadata": {},
"outputs": [],
@@ -297,7 +289,7 @@
},
{
"cell_type": "code",
- "execution_count": 30,
+ "execution_count": 25,
"id": "ef9f459b-e8b8-40b9-a94d-80c079968594",
"metadata": {},
"outputs": [],
@@ -311,7 +303,7 @@
},
{
"cell_type": "code",
- "execution_count": 31,
+ "execution_count": 26,
"id": "b01852c2-6192-496c-adff-4270f9380469",
"metadata": {},
"outputs": [
@@ -320,8 +312,8 @@
"output_type": "stream",
"text": [
"Number of results: 5\n",
- "CPU times: user 827 ms, sys: 27.9 ms, total: 855 ms\n",
- "Wall time: 4.58 s\n"
+ "CPU times: user 12.8 s, sys: 1.08 s, total: 13.9 s\n",
+ "Wall time: 4h 22min 50s\n"
]
}
],
@@ -354,7 +346,7 @@
},
{
"cell_type": "code",
- "execution_count": 33,
+ "execution_count": 29,
"id": "9b2a3595-c3b7-4376-b9c5-0db7a42b3ee4",
"metadata": {},
"outputs": [
@@ -377,7 +369,7 @@
},
{
"cell_type": "code",
- "execution_count": 34,
+ "execution_count": 30,
"id": "c26d7540-feb8-4581-849e-003f4bf2a601",
"metadata": {},
"outputs": [
@@ -387,9 +379,9 @@
"text": [
"System prompt token count: 2464\n",
"Max Completion Token count: 2000\n",
- "Combined docs (context) token count: 2123\n",
+ "Combined docs (context) token count: 1821\n",
"--------\n",
- "Requested token count: 6587\n",
+ "Requested token count: 6285\n",
"Token limit for gpt-35-turbo-16k : 16384\n",
"Chain Type selected: stuff\n"
]
@@ -420,18 +412,22 @@
},
{
"cell_type": "code",
- "execution_count": 35,
+ "execution_count": 31,
"id": "3ce6efa9-2b8f-4810-904d-5986b4ae0372",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "Reinforcement learning can be used in various use cases such as:\n",
+ "Reinforcement learning can be used in various use cases, including:\n",
+ "\n",
+ "1. Learning prevention strategies in the context of pandemic influenza by automatically learning mitigation policies in complex epidemiological models with a large state space[1].\n",
"\n",
- "1. Learning prevention strategies in the context of pandemic influenza, specifically in constructing epidemiological models and controlling districts in a community[1].\n",
- "2. Personalized music recommendation based on reinforcement learning, capturing changes in listeners' preferences and recommending song sequences[2].\n",
- "3. Job scheduling in data centers using deep reinforcement learning, optimizing the allocation of resources over time and space[3].\n",
+ "2. Computing lockdown decisions for individual cities or regions, while balancing health and economic considerations during the Covid-19 pandemic[2].\n",
+ "\n",
+ "3. Learning sparse reward tasks by combining self-imitation learning with an exploration bonus to enhance exploration and exploitation in reinforcement learning[3].\n",
+ "\n",
+ "4. Modeling epidemics and understanding the consequences of individual decisions on the spread of diseases, and using reinforcement learning to find optimal decisions for individual agents in the framework of game theory and multi-agent reinforcement learning[4].\n",
"\n",
"These are just a few examples of how reinforcement learning can be applied in different domains."
],
@@ -446,8 +442,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "CPU times: user 21 ms, sys: 931 µs, total: 21.9 ms\n",
- "Wall time: 8.05 s\n"
+ "CPU times: user 18 ms, sys: 15.7 ms, total: 33.7 ms\n",
+ "Wall time: 8.79 s\n"
]
}
],
@@ -468,14 +464,31 @@
},
{
"cell_type": "code",
- "execution_count": 36,
+ "execution_count": 32,
"id": "5cf5b323-3b9c-479b-8502-acfc4f7915dd",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "I'm sorry, but I cannot provide the main points of our conversation as it is not mentioned in the provided extracted parts."
+ "The main points of our conversation are as follows:\n",
+ "\n",
+ "- Epidemics of infectious diseases pose a significant threat to public health and global economies.\n",
+ "- Prevention strategies for epidemics are challenging due to the non-linear and complex nature of epidemics.\n",
+ "- A deep reinforcement learning approach was investigated to automatically learn prevention strategies for pandemic influenza.\n",
+ "- A new epidemiological meta-population model was constructed to capture the infection process of pandemic influenza in Great Britain.\n",
+ "- The model balances complexity and computational efficiency to enable the use of reinforcement learning techniques.\n",
+ "- The 'Proximal Policy Optimization' algorithm was evaluated for learning prevention strategies in a single district of the epidemiological model.\n",
+ "- An experiment was conducted to learn a joint policy to control 11 tightly coupled districts, demonstrating the potential for collaboration between districts in designing prevention strategies.\n",
+ "\n",
+ "References:\n",
+ "- [1][1]: \"Deep Reinforcement Learning for Automatically Learning Prevention Strategies in the Context of Pandemic Influenza\"\n",
+ "- [2][2]: \"Quantitative Computation of Lockdowns for Individual Cities or Regions in the Context of Covid-19\"\n",
+ "- [3][3]: \"Explore-then-Exploit: A Framework for Reinforcement Learning in Sparse Reward Tasks\"\n",
+ "- [4][4]: \"Microscopic Approach to Model Epidemics and Optimal Behavior in the Context of Disease Spread\"\n",
+ "- [5][5]: \"Reinforcement Learning: A Survey\"\n",
+ "\n",
+ "Is there anything else I can help you with?"
],
"text/plain": [
""
@@ -506,22 +519,30 @@
},
{
"cell_type": "code",
- "execution_count": 37,
+ "execution_count": 33,
"id": "d98b876e-d264-48ae-b5ed-9801d6a9152b",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "Reinforcement learning has various use cases in different domains. Here are a few examples:\n",
+ "Reinforcement learning has several use cases in various fields. Here are some examples:\n",
"\n",
- "1. Prevention strategies for epidemics: In the context of pandemic influenza, deep reinforcement learning can be used to automatically learn prevention strategies. By constructing epidemiological models and using reinforcement learning techniques, policies can be learned to control and mitigate the spread of infectious diseases within a community of districts[1].\n",
+ "1. **Epidemiology**: Reinforcement learning can be used to learn prevention strategies for infectious diseases like pandemic influenza. By constructing epidemiological models and using reinforcement learning algorithms, researchers can automatically learn effective prevention strategies for controlling the spread of diseases in specific districts or communities[1][2][4].\n",
"\n",
- "2. Personalized music recommendation: Reinforcement learning can be used to improve music recommendation systems by considering the simulation of the interaction process between listeners and songs. By continuously updating the model based on listeners' preferences, reinforcement learning algorithms can recommend song sequences that better match the listeners' preferences[2].\n",
+ "2. **COVID-19 Policies**: Reinforcement learning can be used to compute lockdown decisions for individual cities or regions during the COVID-19 pandemic. By balancing health and economic considerations, reinforcement learning algorithms can automatically learn policies for implementing lockdowns based on disease parameters and population characteristics[2].\n",
"\n",
- "3. Job scheduling in data centers: Reinforcement learning can be applied to job scheduling in data centers to efficiently allocate multi-dimensional resources over time and space. By using Advantage Actor-Critic (A2C) deep reinforcement learning, scheduling policies can be learned to optimize job allocation and improve performance[3].\n",
+ "3. **Sparse Reward Tasks**: Reinforcement learning algorithms, such as self-imitation learning and exploration bonuses, can be used to tackle sparse reward tasks. These algorithms encourage efficient exploitation and exploration, reducing the sample complexity of learning tasks[3].\n",
"\n",
- "These are just a few examples of how reinforcement learning can be applied in different domains. The key idea is to use an agent that learns from interactions with an environment to make decisions and optimize certain objectives. By using feedback in the form of rewards, the agent can learn to take actions that maximize its long-term cumulative reward."
+ "These are just a few examples of how reinforcement learning can be applied in different domains. It offers a powerful framework for learning optimal strategies and making decisions based on feedback and rewards. \n",
+ "\n",
+ "Let me know if you need more information or have any other questions!\n",
+ "\n",
+ "References:\n",
+ "[1] Source: [1]\n",
+ "[2] Source: [2]\n",
+ "[3] Source: [3]\n",
+ "\n"
],
"text/plain": [
""
@@ -542,7 +563,7 @@
},
{
"cell_type": "code",
- "execution_count": 38,
+ "execution_count": 34,
"id": "bf28927b-d9ee-4412-bb07-13e055e832a7",
"metadata": {},
"outputs": [
@@ -551,21 +572,20 @@
"text/markdown": [
"Based on our conversation, here are the main points:\n",
"\n",
- "1. Reinforcement learning has various use cases in different domains.\n",
- "2. In the context of pandemic influenza, deep reinforcement learning can be used to automatically learn prevention strategies.\n",
- "3. Reinforcement learning can be applied to improve music recommendation systems by considering the simulation of the interaction process between listeners and songs.\n",
- "4. Reinforcement learning can be used for job scheduling in data centers to efficiently allocate multi-dimensional resources over time and space.\n",
- "5. The key idea of reinforcement learning is to use an agent that learns from interactions with an environment to make decisions and optimize certain objectives.\n",
- "6. By using feedback in the form of rewards, the agent can learn to take actions that maximize its long-term cumulative reward.\n",
+ "1. Reinforcement learning has several use cases in various fields, including epidemiology, COVID-19 policies, and sparse reward tasks[1][2][3].\n",
"\n",
- "These points summarize the various use cases and the fundamental concept of reinforcement learning that we discussed.\n",
+ "2. In epidemiology, reinforcement learning can be used to learn prevention strategies for infectious diseases like pandemic influenza by constructing epidemiological models and using reinforcement learning algorithms[1].\n",
"\n",
- "References:\n",
- "[1][1] \n",
- "[2][2] \n",
- "[3][3]\n",
+ "3. For COVID-19 policies, reinforcement learning can be used to compute lockdown decisions for individual cities or regions, balancing health and economic considerations[2].\n",
+ "\n",
+ "4. Reinforcement learning algorithms, such as self-imitation learning and exploration bonuses, can be used to tackle sparse reward tasks and reduce the sample complexity of learning tasks[3].\n",
"\n",
- "Is there anything else I can assist you with?"
+ "Reinforcement learning offers a powerful framework for learning optimal strategies and making decisions based on feedback and rewards. Let me know if you need more information or have any other questions!\n",
+ "\n",
+ "References:\n",
+ "[1] Source: [1]\n",
+ "[2] Source: [2]\n",
+ "[3] Source: [3]"
],
"text/plain": [
""
@@ -584,14 +604,29 @@
},
{
"cell_type": "code",
- "execution_count": 39,
+ "execution_count": 35,
"id": "3830b0b8-0ca2-4d0a-9747-f6273368002b",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "I'm sorry, but I couldn't find any relevant information in the extracted parts that directly answers your question about the main points of our conversation. Is there anything else I can assist you with?"
+ "Based on our conversation, here are the main points:\n",
+ "\n",
+ "1. Reinforcement learning has several use cases in various fields, including epidemiology, COVID-19 policies, and sparse reward tasks[1][2][3].\n",
+ "\n",
+ "2. In epidemiology, reinforcement learning can be used to learn prevention strategies for infectious diseases like pandemic influenza by constructing epidemiological models and using reinforcement learning algorithms[1].\n",
+ "\n",
+ "3. For COVID-19 policies, reinforcement learning can be used to compute lockdown decisions for individual cities or regions, balancing health and economic considerations[2].\n",
+ "\n",
+ "4. Reinforcement learning algorithms, such as self-imitation learning and exploration bonuses, can be used to tackle sparse reward tasks and reduce the sample complexity of learning tasks[3].\n",
+ "\n",
+ "Reinforcement learning offers a powerful framework for learning optimal strategies and making decisions based on feedback and rewards. Let me know if you need more information or have any other questions!\n",
+ "\n",
+ "References:\n",
+ "[1] Source: [1]\n",
+ "[2] Source: [2]\n",
+ "[3] Source: [3]"
],
"text/plain": [
""
@@ -620,17 +655,17 @@
},
{
"cell_type": "code",
- "execution_count": 40,
+ "execution_count": 36,
"id": "1279692c-7eb0-4300-8a66-c7025f02c318",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
- "'Human: Tell me some use cases for reinforcement learning\\nAI: Reinforcement learning has various use cases in different domains. Here are a few examples:\\n\\n1. Prevention strategies for epidemics: In the context of pandemic influenza, deep reinforcement learning can be used to automatically learn prevention strategies. By constructing epidemiological models and using reinforcement learning techniques, policies can be learned to control and mitigate the spread of infectious diseases within a community of districts[1].\\n\\n2. Personalized music recommendation: Reinforcement learning can be used to improve music recommendation systems by considering the simulation of the interaction process between listeners and songs. By continuously updating the model based on listeners\\' preferences, reinforcement learning algorithms can recommend song sequences that better match the listeners\\' preferences[2].\\n\\n3. Job scheduling in data centers: Reinforcement learning can be applied to job scheduling in data centers to efficiently allocate multi-dimensional resources over time and space. By using Advantage Actor-Critic (A2C) deep reinforcement learning, scheduling policies can be learned to optimize job allocation and improve performance[3].\\n\\nThese are just a few examples of how reinforcement learning can be applied in different domains. The key idea is to use an agent that learns from interactions with an environment to make decisions and optimize certain objectives. By using feedback in the form of rewards, the agent can learn to take actions that maximize its long-term cumulative reward.\\nHuman: Give me the main points of our conversation\\nAI: Based on our conversation, here are the main points:\\n\\n1. Reinforcement learning has various use cases in different domains.\\n2. In the context of pandemic influenza, deep reinforcement learning can be used to automatically learn prevention strategies.\\n3. Reinforcement learning can be applied to improve music recommendation systems by considering the simulation of the interaction process between listeners and songs.\\n4. Reinforcement learning can be used for job scheduling in data centers to efficiently allocate multi-dimensional resources over time and space.\\n5. The key idea of reinforcement learning is to use an agent that learns from interactions with an environment to make decisions and optimize certain objectives.\\n6. By using feedback in the form of rewards, the agent can learn to take actions that maximize its long-term cumulative reward.\\n\\nThese points summarize the various use cases and the fundamental concept of reinforcement learning that we discussed.\\n\\nReferences:\\n[1][1] \\n[2][2] \\n[3][3]\\n\\nIs there anything else I can assist you with?\\nHuman: Thank you\\nAI: I\\'m sorry, but I couldn\\'t find any relevant information in the extracted parts that directly answers your question about the main points of our conversation. Is there anything else I can assist you with?'"
+ "'Human: Tell me some use cases for reinforcement learning\\nAI: Reinforcement learning has several use cases in various fields. Here are some examples:\\n\\n1. **Epidemiology**: Reinforcement learning can be used to learn prevention strategies for infectious diseases like pandemic influenza. By constructing epidemiological models and using reinforcement learning algorithms, researchers can automatically learn effective prevention strategies for controlling the spread of diseases in specific districts or communities[1][2][4].\\n\\n2. **COVID-19 Policies**: Reinforcement learning can be used to compute lockdown decisions for individual cities or regions during the COVID-19 pandemic. By balancing health and economic considerations, reinforcement learning algorithms can automatically learn policies for implementing lockdowns based on disease parameters and population characteristics[2].\\n\\n3. **Sparse Reward Tasks**: Reinforcement learning algorithms, such as self-imitation learning and exploration bonuses, can be used to tackle sparse reward tasks. These algorithms encourage efficient exploitation and exploration, reducing the sample complexity of learning tasks[3].\\n\\nThese are just a few examples of how reinforcement learning can be applied in different domains. It offers a powerful framework for learning optimal strategies and making decisions based on feedback and rewards. \\n\\nLet me know if you need more information or have any other questions!\\n\\nReferences:\\n[1] Source: [1]\\n[2] Source: [2]\\n[3] Source: [3]\\n\\n\\nHuman: Give me the main points of our conversation\\nAI: Based on our conversation, here are the main points:\\n\\n1. Reinforcement learning has several use cases in various fields, including epidemiology, COVID-19 policies, and sparse reward tasks[1][2][3].\\n\\n2. In epidemiology, reinforcement learning can be used to learn prevention strategies for infectious diseases like pandemic influenza by constructing epidemiological models and using reinforcement learning algorithms[1].\\n\\n3. For COVID-19 policies, reinforcement learning can be used to compute lockdown decisions for individual cities or regions, balancing health and economic considerations[2].\\n\\n4. Reinforcement learning algorithms, such as self-imitation learning and exploration bonuses, can be used to tackle sparse reward tasks and reduce the sample complexity of learning tasks[3].\\n\\nReinforcement learning offers a powerful framework for learning optimal strategies and making decisions based on feedback and rewards. Let me know if you need more information or have any other questions!\\n\\nReferences:\\n[1] Source: [1]\\n[2] Source: [2]\\n[3] Source: [3]\\nHuman: Thank you\\nAI: Based on our conversation, here are the main points:\\n\\n1. Reinforcement learning has several use cases in various fields, including epidemiology, COVID-19 policies, and sparse reward tasks[1][2][3].\\n\\n2. In epidemiology, reinforcement learning can be used to learn prevention strategies for infectious diseases like pandemic influenza by constructing epidemiological models and using reinforcement learning algorithms[1].\\n\\n3. For COVID-19 policies, reinforcement learning can be used to compute lockdown decisions for individual cities or regions, balancing health and economic considerations[2].\\n\\n4. Reinforcement learning algorithms, such as self-imitation learning and exploration bonuses, can be used to tackle sparse reward tasks and reduce the sample complexity of learning tasks[3].\\n\\nReinforcement learning offers a powerful framework for learning optimal strategies and making decisions based on feedback and rewards. Let me know if you need more information or have any other questions!\\n\\nReferences:\\n[1] Source: [1]\\n[2] Source: [2]\\n[3] Source: [3]'"
]
},
- "execution_count": 40,
+ "execution_count": 36,
"metadata": {},
"output_type": "execute_result"
}
@@ -654,7 +689,7 @@
},
{
"cell_type": "code",
- "execution_count": 41,
+ "execution_count": 37,
"id": "c7131daa",
"metadata": {},
"outputs": [],
@@ -675,7 +710,7 @@
},
{
"cell_type": "code",
- "execution_count": 42,
+ "execution_count": 38,
"id": "d87cc7c6-5ef1-4492-b133-9f63a392e223",
"metadata": {},
"outputs": [],
@@ -686,27 +721,25 @@
},
{
"cell_type": "code",
- "execution_count": 43,
+ "execution_count": 39,
"id": "27ceb47a",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "Reinforcement learning has various use cases across different domains. Here are a few examples:\n",
+ "Reinforcement learning is a powerful technique that has various use cases. Here are a few examples:\n",
"\n",
- "1. **Prevention strategies for epidemics**: Reinforcement learning can be used to automatically learn prevention strategies in the context of pandemics, such as pandemic influenza. By constructing epidemiological models and using reinforcement learning algorithms like Proximal Policy Optimization, deep reinforcement learning can learn mitigation policies in complex epidemiological models with a large state space[1].\n",
+ "1. **Learning Prevention Strategies in Epidemics**: Reinforcement learning can be used to automatically learn prevention strategies in the context of pandemic influenza. By constructing epidemiological models and using reinforcement learning algorithms, researchers can learn mitigation policies in complex models with a large state space[1][2].\n",
"\n",
- "2. **Personalized music recommendation**: Reinforcement learning can improve personalized music recommendation systems by considering the simulation of the interaction process. By using techniques like weighted matrix factorization and convolutional neural networks, reinforcement learning algorithms can recommend song sequences that better match listeners' preferences[2].\n",
+ "2. **Sparse Reward Tasks**: Reinforcement learning can be used to tackle sparse reward tasks, which are challenging. By combining self-imitation learning and exploration bonuses, researchers have achieved superior or comparable performance on various environments with episodic reward settings[3].\n",
"\n",
- "3. **Efficient job scheduling in data centers**: Reinforcement learning can be used for job scheduling in data centers. Algorithms like Advantage Actor-Critic (A2C) can automatically learn scheduling policies and reduce estimation errors. A2C-based approaches have shown competitive scheduling performance using both simulated workloads and real data collected from academic data centers[3].\n",
+ "3. **Modeling Epidemics and Individual Decision-Making**: Reinforcement learning can be used to model epidemics and explicitly consider the consequences of individual decisions on the spread of the disease. By formulating a microscopic multi-agent epidemic model and using game theory and multi-agent reinforcement learning, researchers can make predictions about the spread of the disease and explore the effects of external interventions to regulate agents' behaviors[4].\n",
"\n",
- "These are just a few examples of the use cases for reinforcement learning. It is a versatile approach that can be applied to various domains and problems. Let me know if there's anything else I can help with!\n",
+ "These are just a few examples of how reinforcement learning can be applied. It has a wide range of applications in various fields, including robotics, game playing, recommendation systems, and more. Let me know if you'd like more information or have any other questions.\n",
"\n",
"References:\n",
- "1. [1]\n",
- "2. [2]\n",
- "3. [3]"
+ "[1]1 [2]2 [3]3 [4]4"
],
"text/plain": [
""
@@ -765,14 +798,25 @@
},
{
"cell_type": "code",
- "execution_count": 45,
+ "execution_count": 40,
"id": "be1620fa",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "I'm sorry, but I couldn't find any relevant information in the extracted parts that directly answers your question about the main points of our conversation. The extracted parts mainly discuss reinforcement learning and its applications in different domains, such as prevention strategies for epidemics, personalized music recommendation, and efficient job scheduling in data centers. If you have any specific questions or need further clarification on any of the topics we discussed, please let me know and I'll be happy to help!"
+ "Reinforcement learning has various use cases in different fields. Here are some examples:\n",
+ "\n",
+ "1. **Learning Prevention Strategies in Epidemics**: Reinforcement learning can be used to automatically learn prevention strategies in the context of pandemic influenza. By constructing epidemiological models and using reinforcement learning algorithms, researchers can learn mitigation policies in complex models with a large state space[1][2].\n",
+ "\n",
+ "2. **Sparse Reward Tasks**: Reinforcement learning can be used to tackle sparse reward tasks, which are challenging. By combining self-imitation learning and exploration bonuses, researchers have achieved superior or comparable performance on various environments with episodic reward settings[3].\n",
+ "\n",
+ "3. **Modeling Epidemics and Individual Decision-Making**: Reinforcement learning can be used to model epidemics and explicitly consider the consequences of individual decisions on the spread of the disease. By formulating a microscopic multi-agent epidemic model and using game theory and multi-agent reinforcement learning, researchers can make predictions about the spread of the disease and explore the effects of external interventions to regulate agents' behaviors[4].\n",
+ "\n",
+ "These examples demonstrate the versatility of reinforcement learning in addressing complex problems and optimizing decision-making processes. If you'd like more information or have any other questions, feel free to ask.\n",
+ "\n",
+ "References:\n",
+ "[1]1 [2]2 [3]3 [4]4"
],
"text/plain": [
""
@@ -799,7 +843,7 @@
},
{
"cell_type": "code",
- "execution_count": 46,
+ "execution_count": 41,
"id": "e1d7688a",
"metadata": {},
"outputs": [
@@ -807,14 +851,12 @@
"data": {
"text/plain": [
"[HumanMessage(content='Tell me some use cases for reinforcement learning'),\n",
- " AIMessage(content='Reinforcement learning has various use cases across different domains. Here are a few examples:\\n\\n1. **Prevention strategies for epidemics**: Reinforcement learning can be used to automatically learn prevention strategies in the context of pandemics, such as pandemic influenza. By constructing epidemiological models and using reinforcement learning algorithms like Proximal Policy Optimization, deep reinforcement learning can learn mitigation policies in complex epidemiological models with a large state space[1].\\n\\n2. **Personalized music recommendation**: Reinforcement learning can improve personalized music recommendation systems by considering the simulation of the interaction process. By using techniques like weighted matrix factorization and convolutional neural networks, reinforcement learning algorithms can recommend song sequences that better match listeners\\' preferences[2].\\n\\n3. **Efficient job scheduling in data centers**: Reinforcement learning can be used for job scheduling in data centers. Algorithms like Advantage Actor-Critic (A2C) can automatically learn scheduling policies and reduce estimation errors. A2C-based approaches have shown competitive scheduling performance using both simulated workloads and real data collected from academic data centers[3].\\n\\nThese are just a few examples of the use cases for reinforcement learning. It is a versatile approach that can be applied to various domains and problems. Let me know if there\\'s anything else I can help with!\\n\\nReferences:\\n1. [1]\\n2. [2]\\n3. [3]'),\n",
- " HumanMessage(content='Give me the main points of our conversation'),\n",
- " AIMessage(content='Based on our conversation, here are the main points:\\n\\n1. Reinforcement learning has various use cases across different domains.\\n2. One use case is the application of reinforcement learning in prevention strategies for epidemics, such as pandemic influenza. Deep reinforcement learning algorithms like Proximal Policy Optimization can learn mitigation policies in complex epidemiological models with a large state space[1].\\n3. Another use case is personalized music recommendation. Reinforcement learning algorithms can improve personalized music recommendation systems by considering the simulation of the interaction process and recommending song sequences that better match listeners\\' preferences[2].\\n4. Reinforcement learning can also be applied to efficient job scheduling in data centers. Algorithms like Advantage Actor-Critic (A2C) can automatically learn scheduling policies and reduce estimation errors[3].\\n\\nThese are just a few examples of the use cases for reinforcement learning. It is a versatile approach that can be applied to various domains and problems.\\n\\nReferences:\\n1. [1]\\n2. [2]\\n3. [3]\\n\\nLet me know if there\\'s anything else I can help with!'),\n",
+ " AIMessage(content='Reinforcement learning is a powerful technique that has various use cases. Here are a few examples:\\n\\n1. **Learning Prevention Strategies in Epidemics**: Reinforcement learning can be used to automatically learn prevention strategies in the context of pandemic influenza. By constructing epidemiological models and using reinforcement learning algorithms, researchers can learn mitigation policies in complex models with a large state space[1][2].\\n\\n2. **Sparse Reward Tasks**: Reinforcement learning can be used to tackle sparse reward tasks, which are challenging. By combining self-imitation learning and exploration bonuses, researchers have achieved superior or comparable performance on various environments with episodic reward settings[3].\\n\\n3. **Modeling Epidemics and Individual Decision-Making**: Reinforcement learning can be used to model epidemics and explicitly consider the consequences of individual decisions on the spread of the disease. By formulating a microscopic multi-agent epidemic model and using game theory and multi-agent reinforcement learning, researchers can make predictions about the spread of the disease and explore the effects of external interventions to regulate agents\\' behaviors[4].\\n\\nThese are just a few examples of how reinforcement learning can be applied. It has a wide range of applications in various fields, including robotics, game playing, recommendation systems, and more. Let me know if you\\'d like more information or have any other questions.\\n\\nReferences:\\n[1]1 [2]2 [3]3 [4]4'),\n",
" HumanMessage(content='Thank you'),\n",
- " AIMessage(content=\"I'm sorry, but I couldn't find any relevant information in the extracted parts that directly answers your question about the main points of our conversation. The extracted parts mainly discuss reinforcement learning and its applications in different domains, such as prevention strategies for epidemics, personalized music recommendation, and efficient job scheduling in data centers. If you have any specific questions or need further clarification on any of the topics we discussed, please let me know and I'll be happy to help!\")]"
+ " AIMessage(content='Reinforcement learning has various use cases in different fields. Here are some examples:\\n\\n1. **Learning Prevention Strategies in Epidemics**: Reinforcement learning can be used to automatically learn prevention strategies in the context of pandemic influenza. By constructing epidemiological models and using reinforcement learning algorithms, researchers can learn mitigation policies in complex models with a large state space[1][2].\\n\\n2. **Sparse Reward Tasks**: Reinforcement learning can be used to tackle sparse reward tasks, which are challenging. By combining self-imitation learning and exploration bonuses, researchers have achieved superior or comparable performance on various environments with episodic reward settings[3].\\n\\n3. **Modeling Epidemics and Individual Decision-Making**: Reinforcement learning can be used to model epidemics and explicitly consider the consequences of individual decisions on the spread of the disease. By formulating a microscopic multi-agent epidemic model and using game theory and multi-agent reinforcement learning, researchers can make predictions about the spread of the disease and explore the effects of external interventions to regulate agents\\' behaviors[4].\\n\\nThese examples demonstrate the versatility of reinforcement learning in addressing complex problems and optimizing decision-making processes. If you\\'d like more information or have any other questions, feel free to ask.\\n\\nReferences:\\n[1]1 [2]2 [3]3 [4]4')]"
]
},
- "execution_count": 46,
+ "execution_count": 41,
"metadata": {},
"output_type": "execute_result"
}
@@ -882,9 +924,9 @@
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3.10 - SDK v2",
+ "display_name": ".venv",
"language": "python",
- "name": "python310-sdkv2"
+ "name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -896,7 +938,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.11"
+ "version": "3.11.5"
}
},
"nbformat": 4,
diff --git a/06-TabularDataQA.ipynb b/06-TabularDataQA.ipynb
index 64e41883..705df70b 100644
--- a/06-TabularDataQA.ipynb
+++ b/06-TabularDataQA.ipynb
@@ -77,7 +77,7 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": 10,
"id": "73bc931d-59d1-4fa7-876f-ce597a1ca153",
"metadata": {},
"outputs": [
@@ -85,16 +85,16 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "--2023-12-08 05:28:27-- https://covidtracking.com/data/download/all-states-history.csv\n",
- "Resolving covidtracking.com (covidtracking.com)... 172.67.183.132, 104.21.64.114, 2606:4700:3032::ac43:b784, ...\n",
- "Connecting to covidtracking.com (covidtracking.com)|172.67.183.132|:443... connected.\n",
+ "--2024-01-31 12:18:10-- https://covidtracking.com/data/download/all-states-history.csv\n",
+ "Resolving covidtracking.com (covidtracking.com)... 2606:4700:3034::6815:4072, 2606:4700:3032::ac43:b784, 104.21.64.114, ...\n",
+ "Connecting to covidtracking.com (covidtracking.com)|2606:4700:3034::6815:4072|:443... connected.\n",
"HTTP request sent, awaiting response... 200 OK\n",
"Length: unspecified [text/csv]\n",
- "Saving to: ‘./data/all-states-history.csv.1’\n",
+ "Saving to: ‘./data/all-states-history.csv’\n",
"\n",
- "all-states-history. [ <=> ] 2.61M --.-KB/s in 0.07s \n",
+ "all-states-history. [ <=> ] 2,61M --.-KB/s in 0,1s \n",
"\n",
- "2023-12-08 05:28:27 (37.8 MB/s) - ‘./data/all-states-history.csv.1’ saved [2738601]\n",
+ "2024-01-31 12:18:11 (21,8 MB/s) - ‘./data/all-states-history.csv’ saved [2738601]\n",
"\n"
]
}
@@ -105,7 +105,7 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": 11,
"id": "54c0f7eb-0ec2-44aa-b02b-8dbe1b122b28",
"metadata": {},
"outputs": [
@@ -332,7 +332,7 @@
"[5 rows x 41 columns]"
]
},
- "execution_count": 5,
+ "execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
@@ -346,7 +346,7 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": 12,
"id": "d703e877-0a85-43c5-ab35-8ecbe72c44c8",
"metadata": {},
"outputs": [
@@ -371,7 +371,7 @@
" dtype='object')"
]
},
- "execution_count": 6,
+ "execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@@ -406,7 +406,7 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": 13,
"id": "b86deb94-a500-4187-9638-55fc64ce0115",
"metadata": {},
"outputs": [],
@@ -419,7 +419,7 @@
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": 15,
"id": "46238c2e-2eb4-4fc3-8472-b894380a5063",
"metadata": {},
"outputs": [],
@@ -440,7 +440,7 @@
},
{
"cell_type": "code",
- "execution_count": 9,
+ "execution_count": 16,
"id": "2927c9d0-1980-437e-9b06-7462bb6602a0",
"metadata": {},
"outputs": [],
@@ -450,7 +450,7 @@
},
{
"cell_type": "code",
- "execution_count": 10,
+ "execution_count": 17,
"id": "44a7b5bf-7601-4b4c-b76f-a8a64dda7c39",
"metadata": {},
"outputs": [
@@ -460,7 +460,7 @@
"['python_repl_ast']"
]
},
- "execution_count": 10,
+ "execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@@ -471,7 +471,7 @@
},
{
"cell_type": "code",
- "execution_count": 11,
+ "execution_count": 18,
"id": "904e0276-78a2-4555-96ce-ece5a99e4db1",
"metadata": {},
"outputs": [
@@ -525,7 +525,7 @@
},
{
"cell_type": "code",
- "execution_count": 12,
+ "execution_count": 19,
"id": "d6eb9727-036f-4a43-a796-7702183fc57d",
"metadata": {},
"outputs": [
@@ -536,87 +536,93 @@
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
- "\u001b[32;1m\u001b[1;3mThought: \n",
- "To answer this question, I need to filter the dataframe for the month of July 2020 and then sum the 'hospitalizedIncrease' column for the state of Texas and for all states. \n",
- "\n",
+ "\u001b[32;1m\u001b[1;3mThought: First, I need to set the pandas display options to show all columns. Then, I will retrieve the column names to ensure I understand the structure of the dataframe.\n",
"Action: python_repl_ast\n",
"Action Input: \n",
"```python\n",
- "# Import necessary libraries\n",
"import pandas as pd\n",
- "\n",
- "# Set pandas display options\n",
"pd.set_option('display.max_columns', None)\n",
- "\n",
- "# Print column names\n",
- "print(df.columns)\n",
"```\u001b[0m\n",
- "Observation: \u001b[36;1m\u001b[1;3mIndex(['date', 'state', 'death', 'deathConfirmed', 'deathIncrease',\n",
- " 'deathProbable', 'hospitalized', 'hospitalizedCumulative',\n",
- " 'hospitalizedCurrently', 'hospitalizedIncrease', 'inIcuCumulative',\n",
- " 'inIcuCurrently', 'negative', 'negativeIncrease',\n",
- " 'negativeTestsAntibody', 'negativeTestsPeopleAntibody',\n",
- " 'negativeTestsViral', 'onVentilatorCumulative', 'onVentilatorCurrently',\n",
- " 'positive', 'positiveCasesViral', 'positiveIncrease', 'positiveScore',\n",
- " 'positiveTestsAntibody', 'positiveTestsAntigen',\n",
- " 'positiveTestsPeopleAntibody', 'positiveTestsPeopleAntigen',\n",
- " 'positiveTestsViral', 'recovered', 'totalTestEncountersViral',\n",
- " 'totalTestEncountersViralIncrease', 'totalTestResults',\n",
- " 'totalTestResultsIncrease', 'totalTestsAntibody', 'totalTestsAntigen',\n",
- " 'totalTestsPeopleAntibody', 'totalTestsPeopleAntigen',\n",
- " 'totalTestsPeopleViral', 'totalTestsPeopleViralIncrease',\n",
- " 'totalTestsViral', 'totalTestsViralIncrease'],\n",
- " dtype='object')\n",
- "\u001b[0m\n",
- "Thought:\u001b[32;1m\u001b[1;3mThe 'hospitalizedIncrease' column is present in the dataframe. Now, I need to filter the dataframe for the month of July 2020 and then sum the 'hospitalizedIncrease' column for the state of Texas and for all states.\n",
- "\n",
+ "Observation: \u001b[36;1m\u001b[1;3m\u001b[0m\n",
+ "Thought:\u001b[32;1m\u001b[1;3mThe action was not executed, so I will try again to set the pandas display options to show all columns.\n",
+ "Action: python_repl_ast\n",
+ "Action Input: \n",
+ "```python\n",
+ "import pandas as pd\n",
+ "pd.set_option('display.max_columns', None)\n",
+ "```\u001b[0m\n",
+ "Observation: \u001b[36;1m\u001b[1;3m\u001b[0m\n",
+ "Thought:\u001b[32;1m\u001b[1;3mThought: The action to set the pandas display options was not executed, so I will try again to set the pandas display options to show all columns.\n",
+ "Action: python_repl_ast\n",
+ "Action Input: \n",
+ "```python\n",
+ "import pandas as pd\n",
+ "pd.set_option('display.max_columns', None)\n",
+ "```\u001b[0m\n",
+ "Observation: \u001b[36;1m\u001b[1;3m\u001b[0m\n",
+ "Thought:\u001b[32;1m\u001b[1;3mThought: The action to set the pandas display options was not executed, so I will try again to set the pandas display options to show all columns.\n",
"Action: python_repl_ast\n",
"Action Input: \n",
"```python\n",
- "# Convert 'date' column to datetime\n",
- "df['date'] = pd.to_datetime(df['date'])\n",
+ "import pandas as pd\n",
+ "pd.set_option('display.max_columns', None)\n",
+ "```\u001b[0m\n",
+ "Observation: \u001b[36;1m\u001b[1;3m\u001b[0m\n",
+ "Thought:\u001b[32;1m\u001b[1;3mIt seems there is an issue with executing the action to set the pandas display options. However, I can still proceed to get the column names of the dataframe `df` to understand its structure.\n",
"\n",
- "# Filter for July 2020\n",
- "df_july_2020 = df[(df['date'] >= '2020-07-01') & (df['date'] <= '2020-07-31')]\n",
+ "Action: python_repl_ast\n",
+ "Action Input:\n",
+ "```python\n",
+ "df.columns.tolist()\n",
+ "```\u001b[0m\n",
+ "Observation: \u001b[36;1m\u001b[1;3m['date', 'state', 'death', 'deathConfirmed', 'deathIncrease', 'deathProbable', 'hospitalized', 'hospitalizedCumulative', 'hospitalizedCurrently', 'hospitalizedIncrease', 'inIcuCumulative', 'inIcuCurrently', 'negative', 'negativeIncrease', 'negativeTestsAntibody', 'negativeTestsPeopleAntibody', 'negativeTestsViral', 'onVentilatorCumulative', 'onVentilatorCurrently', 'positive', 'positiveCasesViral', 'positiveIncrease', 'positiveScore', 'positiveTestsAntibody', 'positiveTestsAntigen', 'positiveTestsPeopleAntibody', 'positiveTestsPeopleAntigen', 'positiveTestsViral', 'recovered', 'totalTestEncountersViral', 'totalTestEncountersViralIncrease', 'totalTestResults', 'totalTestResultsIncrease', 'totalTestsAntibody', 'totalTestsAntigen', 'totalTestsPeopleAntibody', 'totalTestsPeopleAntigen', 'totalTestsPeopleViral', 'totalTestsPeopleViralIncrease', 'totalTestsViral', 'totalTestsViralIncrease']\u001b[0m\n",
+ "Thought:\u001b[32;1m\u001b[1;3mNow that I have the column names, I can see that 'hospitalizedIncrease' is the column that tracks the increase in hospitalizations. To answer the question, I need to filter the dataframe for the month of July 2020 and for the state of Texas, sum the 'hospitalizedIncrease' column for Texas, and then sum the 'hospitalizedIncrease' column for all states to get the nationwide total.\n",
"\n",
- "# Calculate the sum of 'hospitalizedIncrease' for Texas\n",
- "tx_hospitalized = df_july_2020[df_july_2020['state'] == 'TX']['hospitalizedIncrease'].sum()\n",
+ "Action: python_repl_ast\n",
+ "Action Input:\n",
+ "```python\n",
+ "# Filter for July 2020 and the state of Texas\n",
+ "texas_hospitalized_july_2020 = df[(df['date'] >= '2020-07-01') & (df['date'] <= '2020-07-31') & (df['state'] == 'TX')]['hospitalizedIncrease'].sum()\n",
"\n",
- "# Calculate the sum of 'hospitalizedIncrease' for all states\n",
- "all_states_hospitalized = df_july_2020['hospitalizedIncrease'].sum()\n",
+ "# Filter for July 2020 for all states and sum the 'hospitalizedIncrease' column\n",
+ "nationwide_hospitalized_july_2020 = df[(df['date'] >= '2020-07-01') & (df['date'] <= '2020-07-31')]['hospitalizedIncrease'].sum()\n",
"\n",
- "tx_hospitalized, all_states_hospitalized\n",
+ "texas_hospitalized_july_2020, nationwide_hospitalized_july_2020\n",
"```\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m(0, 63105)\u001b[0m\n",
- "Thought:\u001b[32;1m\u001b[1;3mThe number of patients hospitalized in Texas during July 2020 is 0 and the total number of patients hospitalized nationwide during the same period is 63105. \n",
- "\n",
- "To confirm these results, I will try another method. I will group the dataframe by state and then sum the 'hospitalizedIncrease' column. \n",
+ "Thought:\u001b[32;1m\u001b[1;3mThe first method has provided a result, but it seems that the value for Texas is 0, which might not be correct. It's possible that the data for Texas in July 2020 is not present in the sample provided. To confirm the results, I will try another method to ensure the accuracy of the answer. This time, I will filter the dataframe for July 2020 and group by the 'state' column, then sum the 'hospitalizedIncrease' column to see the results for each state, including Texas, and the total.\n",
"\n",
"Action: python_repl_ast\n",
- "Action Input: \n",
+ "Action Input:\n",
"```python\n",
- "# Group by state and sum 'hospitalizedIncrease'\n",
- "statewise_hospitalized = df_july_2020.groupby('state')['hospitalizedIncrease'].sum()\n",
+ "# Group by state and sum the 'hospitalizedIncrease' for July 2020\n",
+ "hospitalized_by_state_july_2020 = df[(df['date'] >= '2020-07-01') & (df['date'] <= '2020-07-31')].groupby('state')['hospitalizedIncrease'].sum()\n",
+ "\n",
+ "# Get the value for Texas\n",
+ "texas_hospitalized = hospitalized_by_state_july_2020.get('TX', 0)\n",
"\n",
- "# Get the sum for Texas and for all states\n",
- "tx_hospitalized_2 = statewise_hospitalized['TX']\n",
- "all_states_hospitalized_2 = statewise_hospitalized.sum()\n",
+ "# Get the total for all states\n",
+ "nationwide_hospitalized = hospitalized_by_state_july_2020.sum()\n",
"\n",
- "tx_hospitalized_2, all_states_hospitalized_2\n",
+ "texas_hospitalized, nationwide_hospitalized\n",
"```\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m(0, 63105)\u001b[0m\n",
- "Thought:\u001b[32;1m\u001b[1;3mThe results from the second method match the results from the first method. Therefore, I am confident that the number of patients hospitalized in Texas during July 2020 is 0 and the total number of patients hospitalized nationwide during the same period is 63105.\n",
+ "Thought:\u001b[32;1m\u001b[1;3mBoth methods have yielded the same result for the nationwide total of hospitalizations in July 2020, which is 63,105. However, for Texas, both methods have returned a value of 0. This could indicate that either there were no hospitalizations recorded for Texas in the dataset for July 2020, or the data for Texas in that period is missing or not included in the sample provided.\n",
"\n",
- "Final Answer: The number of patients hospitalized in Texas during July 2020 is 0 and the total number of patients hospitalized nationwide during the same period is 63105.\n",
+ "Final Answer:\n",
+ "\n",
+ "The number of patients hospitalized during July 2020 in Texas is 0 according to the dataset provided. The total number of patients hospitalized nationwide in the United States during July 2020 is 63,105.\n",
"\n",
"Explanation:\n",
- "I used the 'hospitalizedIncrease' column to calculate the number of patients hospitalized. I first filtered the dataframe for the month of July 2020. Then, I summed the 'hospitalizedIncrease' column for the state of Texas and for all states. I confirmed these results by grouping the dataframe by state and summing the 'hospitalizedIncrease' column. The results from both methods matched, confirming the final answer.\u001b[0m\n",
+ "\n",
+ "To find the number of patients hospitalized in Texas and nationwide during July 2020, I used the 'hospitalizedIncrease' column, which represents the increase in the number of hospitalizations. I filtered the dataframe for the date range corresponding to July 2020 and then summed the 'hospitalizedIncrease' column for Texas and for all states. The same result was obtained using two different methods: direct summation after filtering for Texas and nationwide, and grouping by state followed by summation. Both methods confirmed that the nationwide total was 63,105, while the value for Texas was 0, which suggests that the data for Texas might be missing for that period in the provided dataset.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
- "The number of patients hospitalized in Texas during July 2020 is 0 and the total number of patients hospitalized nationwide during the same period is 63105.\n",
+ "The number of patients hospitalized during July 2020 in Texas is 0 according to the dataset provided. The total number of patients hospitalized nationwide in the United States during July 2020 is 63,105.\n",
"\n",
"Explanation:\n",
- "I used the 'hospitalizedIncrease' column to calculate the number of patients hospitalized. I first filtered the dataframe for the month of July 2020. Then, I summed the 'hospitalizedIncrease' column for the state of Texas and for all states. I confirmed these results by grouping the dataframe by state and summing the 'hospitalizedIncrease' column. The results from both methods matched, confirming the final answer.\n"
+ "\n",
+ "To find the number of patients hospitalized in Texas and nationwide during July 2020, I used the 'hospitalizedIncrease' column, which represents the increase in the number of hospitalizations. I filtered the dataframe for the date range corresponding to July 2020 and then summed the 'hospitalizedIncrease' column for Texas and for all states. The same result was obtained using two different methods: direct summation after filtering for Texas and nationwide, and grouping by state followed by summation. Both methods confirmed that the nationwide total was 63,105, while the value for Texas was 0, which suggests that the data for Texas might be missing for that period in the provided dataset.\n"
]
}
],
@@ -647,7 +653,7 @@
},
{
"cell_type": "code",
- "execution_count": 13,
+ "execution_count": 20,
"id": "42209997-aa2a-4b97-b94b-a203bc4c6096",
"metadata": {},
"outputs": [],
@@ -660,7 +666,7 @@
},
{
"cell_type": "code",
- "execution_count": 14,
+ "execution_count": 21,
"id": "349c3020-3383-4ad3-83a4-07c1ead1207d",
"metadata": {},
"outputs": [
@@ -740,9 +746,9 @@
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3.10 - SDK v2",
+ "display_name": ".venv",
"language": "python",
- "name": "python310-sdkv2"
+ "name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -754,7 +760,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.11"
+ "version": "3.11.5"
}
},
"nbformat": 4,
diff --git a/07-SQLDB_QA.ipynb b/07-SQLDB_QA.ipynb
index a1ed15f9..5dd6f66b 100644
--- a/07-SQLDB_QA.ipynb
+++ b/07-SQLDB_QA.ipynb
@@ -51,7 +51,7 @@
},
{
"cell_type": "code",
- "execution_count": 2,
+ "execution_count": 8,
"id": "258a6e99-2d4f-4147-b8ee-c64c85296181",
"metadata": {},
"outputs": [],
@@ -114,29 +114,18 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "Connection successful!\n"
- ]
- },
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "/tmp/ipykernel_36761/2845560204.py:27: RemovedIn20Warning: Deprecated API features detected! These feature(s) are not compatible with SQLAlchemy 2.0. To prevent incompatible upgrades prior to updating applications, ensure requirements files are pinned to \"sqlalchemy<2.0\". Set environment variable SQLALCHEMY_WARN_20=1 to show all deprecation warnings. Set environment variable SQLALCHEMY_SILENCE_UBER_WARNING=1 to silence this message. (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)\n",
- " result = engine.execute(\"SELECT @@Version\")\n"
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
+ "Connection successful!\n",
"('Microsoft SQL Azure (RTM) - 12.0.2000.8 \\n\\tNov 2 2023 01:40:17 \\n\\tCopyright (C) 2022 Microsoft Corporation\\n',)\n"
]
}
],
"source": [
- "from sqlalchemy import create_engine\n",
+ "\n",
+ "from sqlalchemy import create_engine, text\n",
+ "from sqlalchemy.exc import OperationalError\n",
"from sqlalchemy.engine.url import URL\n",
"\n",
+ "\n",
"db_config = {\n",
" 'drivername': 'mssql+pyodbc',\n",
" 'username': os.environ[\"SQL_SERVER_USERNAME\"] +'@'+ os.environ[\"SQL_SERVER_NAME\"],\n",
@@ -150,8 +139,6 @@
"# Create a URL object for connecting to the database\n",
"db_url = URL.create(**db_config)\n",
"\n",
- "# Print the resulting URL string\n",
- "# print(db_url)\n",
"\n",
"# Connect to the Azure SQL Database using the URL string\n",
"engine = create_engine(db_url)\n",
@@ -160,7 +147,10 @@
"try:\n",
" conn = engine.connect()\n",
" print(\"Connection successful!\")\n",
- " result = engine.execute(\"SELECT @@Version\")\n",
+ " # check the database version\n",
+ " result = conn.execute(text(\"SELECT @@version;\"))\n",
+ "\n",
+ " \n",
" for row in result:\n",
" print(row)\n",
" conn.close()\n",
@@ -230,23 +220,29 @@
" create_table_sql += f\"{name} DATETIME, \"\n",
"create_table_sql = create_table_sql[:-2] + \")\"\n",
"\n",
+ "\n",
+ "\n",
"try:\n",
- " #Createse the table in Azure SQL\n",
- " engine.execute(create_table_sql)\n",
- " print(\"Table\",table_name,\"succesfully created\")\n",
- " # Insert data into SQL Database\n",
- " lower = 0\n",
- " upper = 1000\n",
- " limit = df.shape[0]\n",
"\n",
- " while lower < limit:\n",
- " df[lower:upper].to_sql(table_name, con=engine, if_exists='append', index=False)\n",
- " print(\"rows:\", lower, \"-\", upper, \"inserted\")\n",
- " lower = upper\n",
- " upper = min(upper + 1000, limit)\n",
+ " with engine.connect() as conn:\n",
+ "        conn.execute(text(f\"DROP TABLE IF EXISTS {table_name}\"))\n",
+ "        # Creates the table in Azure SQL\n",
+ " conn.execute(text(create_table_sql))\n",
+ "        print(\"Table\", table_name, \"successfully created\")\n",
+ " # Insert data into SQL Database\n",
+ " lower = 0\n",
+ " upper = 1000\n",
+ " limit = df.shape[0]\n",
+ "\n",
+ " while lower < limit:\n",
+ " df[lower:upper].to_sql(table_name, con=conn, if_exists='append', index=False)\n",
+ " print(\"rows:\", lower, \"-\", upper, \"inserted\")\n",
+ " lower = upper\n",
+ " upper = min(upper + 1000, limit)\n",
"\n",
"except Exception as e:\n",
- " print(e)"
+ " print(e)\n",
+ "    raise"
]
},
{
@@ -267,7 +263,7 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": 9,
"id": "7faef3c0-8166-4f3b-a5e3-d30acfd65fd3",
"metadata": {},
"outputs": [],
@@ -279,7 +275,7 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": 10,
"id": "6cbe650c-9e0a-4209-9595-de13f2f1ee0a",
"metadata": {},
"outputs": [],
@@ -290,7 +286,7 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": 11,
"id": "ae80c022-415e-40d1-b205-1744a3164d70",
"metadata": {},
"outputs": [],
@@ -317,7 +313,7 @@
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": 12,
"id": "2b51fb36-68b5-4770-b5f1-c042a08e0a0f",
"metadata": {},
"outputs": [],
@@ -336,7 +332,7 @@
},
{
"cell_type": "code",
- "execution_count": 9,
+ "execution_count": 13,
"id": "21c6c6f5-4a14-403f-a1d0-fe6b0c34a563",
"metadata": {},
"outputs": [
@@ -346,7 +342,7 @@
"['sql_db_query', 'sql_db_schema', 'sql_db_list_tables', 'sql_db_query_checker']"
]
},
- "execution_count": 9,
+ "execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
@@ -358,7 +354,7 @@
},
{
"cell_type": "code",
- "execution_count": 10,
+ "execution_count": 14,
"id": "1cae3488-5334-4fbb-ab97-a710af07f966",
"metadata": {},
"outputs": [
@@ -446,7 +442,7 @@
},
{
"cell_type": "code",
- "execution_count": 11,
+ "execution_count": 15,
"id": "6d7bb8cf-8661-4174-8185-c64b4b20670d",
"metadata": {},
"outputs": [
@@ -458,9 +454,9 @@
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mAction: sql_db_list_tables\n",
- "Action Input: \"\"\u001b[0m\n",
+ "Action Input: \u001b[0m\n",
"Observation: \u001b[38;5;200m\u001b[1;3mcovidtracking\u001b[0m\n",
- "Thought:\u001b[32;1m\u001b[1;3mThe `covidtracking` table seems to be the most relevant one for this question. I should check its schema to see what columns it has.\n",
+ "Thought:\u001b[32;1m\u001b[1;3mThe `covidtracking` table is likely to have the data I need for hospitalizations. I should check the schema of this table to understand what columns are available and to confirm that the `hospitalizedIncrease` column exists and is the correct one to use for this query.\n",
"Action: sql_db_schema\n",
"Action Input: covidtracking\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m\n",
@@ -511,82 +507,51 @@
"/*\n",
"3 rows from covidtracking table:\n",
"date\tstate\tdeath\tdeathConfirmed\tdeathIncrease\tdeathProbable\thospitalized\thospitalizedCumulative\thospitalizedCurrently\thospitalizedIncrease\tinIcuCumulative\tinIcuCurrently\tnegative\tnegativeIncrease\tnegativeTestsAntibody\tnegativeTestsPeopleAntibody\tnegativeTestsViral\tonVentilatorCumulative\tonVentilatorCurrently\tpositive\tpositiveCasesViral\tpositiveIncrease\tpositiveScore\tpositiveTestsAntibody\tpositiveTestsAntigen\tpositiveTestsPeopleAntibody\tpositiveTestsPeopleAntigen\tpositiveTestsViral\trecovered\ttotalTestEncountersViral\ttotalTestEncountersViralIncrease\ttotalTestResults\ttotalTestResultsIncrease\ttotalTestsAntibody\ttotalTestsAntigen\ttotalTestsPeopleAntibody\ttotalTestsPeopleAntigen\ttotalTestsPeopleViral\ttotalTestsPeopleViralIncrease\ttotalTestsViral\ttotalTestsViralIncrease\n",
- "2021-03-07\tAK\t305.0\t0.0\t0\t0.0\t1293.0\t1293.0\t33.0\t0\t0.0\t0.0\t0.0\t0\t0.0\t0.0\t1660758.0\t0.0\t2.0\t56886.0\t0.0\t0\t0\t0.0\t0.0\t0.0\t0.0\t68693.0\t0.0\t0.0\t0\t1731628.0\t0\t0.0\t0.0\t0.0\t0.0\t0.0\t0\t1731628.0\t0\n",
- "2021-03-07\tAL\t10148.0\t7963.0\t-1\t2185.0\t45976.0\t45976.0\t494.0\t0\t2676.0\t0.0\t1931711.0\t2087\t0.0\t0.0\t0.0\t1515.0\t0.0\t499819.0\t392077.0\t408\t0\t0.0\t0.0\t0.0\t0.0\t0.0\t295690.0\t0.0\t0\t2323788.0\t2347\t0.0\t0.0\t119757.0\t0.0\t2323788.0\t2347\t0.0\t0\n",
- "2021-03-07\tAR\t5319.0\t4308.0\t22\t1011.0\t14926.0\t14926.0\t335.0\t11\t0.0\t141.0\t2480716.0\t3267\t0.0\t0.0\t2480716.0\t1533.0\t65.0\t324818.0\t255726.0\t165\t0\t0.0\t0.0\t0.0\t81803.0\t0.0\t315517.0\t0.0\t0\t2736442.0\t3380\t0.0\t0.0\t0.0\t481311.0\t0.0\t0\t2736442.0\t3380\n",
+ "\n",
"*/\u001b[0m\n",
- "Thought:\u001b[32;1m\u001b[1;3mThe `covidtracking` table has the columns `date`, `state`, and `hospitalizedIncrease` which are relevant to the question. I will write a query to get the sum of `hospitalizedIncrease` for the state of Texas and for all states during July 2020.\n",
+ "Thought:\u001b[32;1m\u001b[1;3mThe `covidtracking` table has a column named `hospitalizedIncrease` which is what I need to use to find out the number of patients hospitalized during July 2020 in Texas and nationwide. I will write a query to sum the `hospitalizedIncrease` for the state of Texas and another to sum it for all states for the month of July 2020.\n",
"Action: sql_db_query_checker\n",
- "Action Input: \n",
- "```sql\n",
- "SELECT SUM(hospitalizedIncrease) as TotalHospitalized\n",
- "FROM covidtracking\n",
- "WHERE date LIKE '2020-07%' AND state = 'TX'\n",
- "```\u001b[0m\n",
- "Observation: \u001b[36;1m\u001b[1;3m```sql\n",
- "SELECT SUM(hospitalizedIncrease) as TotalHospitalized\n",
- "FROM covidtracking\n",
- "WHERE date LIKE '2020-07%' AND state = 'TX'\n",
- "```\u001b[0m\n",
- "Thought:\u001b[32;1m\u001b[1;3mThe query syntax is correct. Now I will run this query to get the total number of patients hospitalized in Texas during July 2020.\n",
- "Action: sql_db_query\n",
- "Action Input: \n",
+ "Action Input: SELECT SUM(hospitalizedIncrease) AS TotalHospitalizedTexas FROM covidtracking WHERE state = 'TX' AND date LIKE '2020-07%'\u001b[0m\n",
+ "Observation: \u001b[36;1m\u001b[1;3mThe provided SQL query does not exhibit any of the common mistakes listed. It correctly uses the `SUM` function to aggregate the `hospitalizedIncrease` column, filters by the state of Texas (`'TX'`), and uses the `LIKE` operator to filter dates for the month of July 2020. The query does not use `NOT IN`, `UNION`, `BETWEEN`, or joins, so there are no mistakes related to those aspects. There are no apparent data type mismatches, and the identifiers do not require quoting. The function `SUM` is used with the correct number of arguments.\n",
+ "\n",
+ "Here is the original query:\n",
+ "\n",
"```sql\n",
- "SELECT SUM(hospitalizedIncrease) as TotalHospitalized\n",
- "FROM covidtracking\n",
- "WHERE date LIKE '2020-07%' AND state = 'TX'\n",
+ "SELECT SUM(hospitalizedIncrease) AS TotalHospitalizedTexas FROM covidtracking WHERE state = 'TX' AND date LIKE '2020-07%'\n",
"```\u001b[0m\n",
- "Observation: \u001b[36;1m\u001b[1;3mError: (pyodbc.ProgrammingError) ('42000', \"[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near '`'. (102) (SQLExecDirectW)\")\n",
- "[SQL: ```sql\n",
- "SELECT SUM(hospitalizedIncrease) as TotalHospitalized\n",
- "FROM covidtracking\n",
- "WHERE date LIKE '2020-07%' AND state = 'TX'\n",
- "```]\n",
- "(Background on this error at: https://sqlalche.me/e/14/f405)\u001b[0m\n",
- "Thought:\u001b[32;1m\u001b[1;3mIt seems there was an error due to the inclusion of markdown backticks in the SQL query. I will remove the backticks and try again.\n",
+ "Thought:\u001b[32;1m\u001b[1;3mThe query checker has confirmed that the query syntax is correct. Now I will execute the query to get the total number of patients hospitalized in Texas during July 2020.\n",
"Action: sql_db_query\n",
- "Action Input: \n",
- "SELECT SUM(hospitalizedIncrease) as TotalHospitalized\n",
- "FROM covidtracking\n",
- "WHERE date LIKE '2020-07%' AND state = 'TX'\u001b[0m\n",
- "Observation: \u001b[36;1m\u001b[1;3m[(0,)]\u001b[0m\n",
- "Thought:\u001b[32;1m\u001b[1;3mThe result indicates that no patients were hospitalized in Texas during July 2020 according to the `hospitalizedIncrease` column. Now I will write and run a query to get the total number of patients hospitalized in all states during July 2020.\n",
+ "Action Input: SELECT SUM(hospitalizedIncrease) AS TotalHospitalizedTexas FROM covidtracking WHERE state = 'TX' AND date LIKE '2020-07%'\u001b[0m\n",
+ "Observation: \u001b[36;1m\u001b[1;3m[(None,)]\u001b[0m\n",
+ "Thought:\u001b[32;1m\u001b[1;3mThe result returned is `None`, which indicates that there might be no data for the `hospitalizedIncrease` column for Texas in July 2020, or the data might contain NULL values which result in a `None` when summed. I will now write and check a query to get the nationwide total of hospitalized patients during July 2020.\n",
"Action: sql_db_query_checker\n",
- "Action Input: \n",
+ "Action Input: SELECT SUM(hospitalizedIncrease) AS TotalHospitalizedNationwide FROM covidtracking WHERE date LIKE '2020-07%'\u001b[0m\n",
+ "Observation: \u001b[36;1m\u001b[1;3mThe provided SQL query does not exhibit any of the common mistakes listed. It correctly uses the `SUM` function to aggregate the `hospitalizedIncrease` column, and it uses the `LIKE` operator to filter the dates for the month of July 2020. The query does not use `NOT IN`, `UNION`/`UNION ALL`, `BETWEEN`, or joins, so those potential issues are not applicable. There is no indication of data type mismatch, improper quoting, incorrect function arguments, or incorrect casting.\n",
+ "\n",
+ "Therefore, the original query is reproduced below without any changes:\n",
+ "\n",
"```sql\n",
- "SELECT SUM(hospitalizedIncrease) as TotalHospitalized\n",
- "FROM covidtracking\n",
- "WHERE date LIKE '2020-07%'\n",
- "```\u001b[0m\n",
- "Observation: \u001b[36;1m\u001b[1;3m```sql\n",
- "SELECT SUM(hospitalizedIncrease) as TotalHospitalized\n",
- "FROM covidtracking\n",
- "WHERE date LIKE '2020-07%'\n",
+ "SELECT SUM(hospitalizedIncrease) AS TotalHospitalizedNationwide FROM covidtracking WHERE date LIKE '2020-07%'\n",
"```\u001b[0m\n",
- "Thought:\u001b[32;1m\u001b[1;3mThe query syntax is correct. Now I will run this query to get the total number of patients hospitalized in all states during July 2020.\n",
+ "Thought:\u001b[32;1m\u001b[1;3mThe query checker has confirmed that the query syntax is correct. Now I will execute the query to get the nationwide total of hospitalized patients during July 2020.\n",
"Action: sql_db_query\n",
- "Action Input: \n",
- "SELECT SUM(hospitalizedIncrease) as TotalHospitalized\n",
- "FROM covidtracking\n",
- "WHERE date LIKE '2020-07%'\u001b[0m\n",
- "Observation: \u001b[36;1m\u001b[1;3m[(63105,)]\u001b[0m\n",
- "Thought:\u001b[32;1m\u001b[1;3mI now know the final answer.\n",
- "Final Answer: During July 2020, no patients were hospitalized in Texas and 63,105 patients were hospitalized nationwide.\n",
+ "Action Input: SELECT SUM(hospitalizedIncrease) AS TotalHospitalizedNationwide FROM covidtracking WHERE date LIKE '2020-07%'\u001b[0m\n",
+ "Observation: \u001b[36;1m\u001b[1;3m[(None,)]\u001b[0m\n",
+ "Thought:\u001b[32;1m\u001b[1;3mThe result returned is `None` for the nationwide total as well, which indicates that there might be no data for the `hospitalizedIncrease` column for the month of July 2020, or the data might contain NULL values which result in a `None` when summed. Since both queries for Texas and nationwide have returned `None`, it is likely that the `hospitalizedIncrease` data for July 2020 is not available or not recorded in the database.\n",
+ "\n",
+ "Final Answer: I cannot provide the number of patients hospitalized during July 2020 in Texas or nationwide because the data is not available or not recorded in the `hospitalizedIncrease` column of the `covidtracking` table.\n",
"\n",
"Explanation:\n",
- "I queried the `covidtracking` table for the sum of the `hospitalizedIncrease` column where the date starts with '2020-07'. For Texas, the query returned 0, indicating that no patients were hospitalized in Texas during July 2020. For all states, the query returned 63,105, indicating that 63,105 patients were hospitalized nationwide during July 2020. \n",
- "I used the following queries:\n",
+ "I executed two queries to calculate the sum of `hospitalizedIncrease` for Texas and nationwide for the month of July 2020. Both queries returned `None`, indicating that the data is not available or contains NULL values. Here are the queries I used:\n",
"\n",
+ "For Texas:\n",
"```sql\n",
- "SELECT SUM(hospitalizedIncrease) as TotalHospitalized\n",
- "FROM covidtracking\n",
- "WHERE date LIKE '2020-07%' AND state = 'TX'\n",
+ "SELECT SUM(hospitalizedIncrease) AS TotalHospitalizedTexas FROM covidtracking WHERE state = 'TX' AND date LIKE '2020-07%'\n",
"```\n",
"\n",
+ "For Nationwide:\n",
"```sql\n",
- "SELECT SUM(hospitalizedIncrease) as TotalHospitalized\n",
- "FROM covidtracking\n",
- "WHERE date LIKE '2020-07%'\n",
+ "SELECT SUM(hospitalizedIncrease) AS TotalHospitalizedNationwide FROM covidtracking WHERE date LIKE '2020-07%'\n",
"```\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
@@ -605,29 +570,26 @@
},
{
"cell_type": "code",
- "execution_count": 12,
+ "execution_count": 16,
"id": "f23d2135-2199-474e-ae83-455aefc9b93b",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "During July 2020, no patients were hospitalized in Texas and 63,105 patients were hospitalized nationwide.\n",
+ "I cannot provide the number of patients hospitalized during July 2020 in Texas or nationwide because the data is not available or not recorded in the `hospitalizedIncrease` column of the `covidtracking` table.\n",
"\n",
"Explanation:\n",
- "I queried the `covidtracking` table for the sum of the `hospitalizedIncrease` column where the date starts with '2020-07'. For Texas, the query returned 0, indicating that no patients were hospitalized in Texas during July 2020. For all states, the query returned 63,105, indicating that 63,105 patients were hospitalized nationwide during July 2020. \n",
- "I used the following queries:\n",
+ "I executed two queries to calculate the sum of `hospitalizedIncrease` for Texas and nationwide for the month of July 2020. Both queries returned `None`, indicating that the data is not available or contains NULL values. Here are the queries I used:\n",
"\n",
+ "For Texas:\n",
"```sql\n",
- "SELECT SUM(hospitalizedIncrease) as TotalHospitalized\n",
- "FROM covidtracking\n",
- "WHERE date LIKE '2020-07%' AND state = 'TX'\n",
+ "SELECT SUM(hospitalizedIncrease) AS TotalHospitalizedTexas FROM covidtracking WHERE state = 'TX' AND date LIKE '2020-07%'\n",
"```\n",
"\n",
+ "For Nationwide:\n",
"```sql\n",
- "SELECT SUM(hospitalizedIncrease) as TotalHospitalized\n",
- "FROM covidtracking\n",
- "WHERE date LIKE '2020-07%'\n",
+ "SELECT SUM(hospitalizedIncrease) AS TotalHospitalizedNationwide FROM covidtracking WHERE date LIKE '2020-07%'\n",
"```"
],
"text/plain": [
@@ -683,9 +645,9 @@
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3.10 - SDK v2",
+ "display_name": ".venv",
"language": "python",
- "name": "python310-sdkv2"
+ "name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -697,7 +659,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.11"
+ "version": "3.11.5"
}
},
"nbformat": 4,
diff --git a/08-BingChatClone.ipynb b/08-BingChatClone.ipynb
index a38bd3d3..1f0379d8 100644
--- a/08-BingChatClone.ipynb
+++ b/08-BingChatClone.ipynb
@@ -24,12 +24,13 @@
},
{
"cell_type": "code",
- "execution_count": 1,
+ "execution_count": 6,
"id": "c1fb79a3-4856-4721-988c-112813690a90",
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
+ "import os\n",
"from typing import Dict, List\n",
"from pydantic import BaseModel, Extra, root_validator\n",
"\n",
@@ -57,7 +58,7 @@
},
{
"cell_type": "code",
- "execution_count": 2,
+ "execution_count": 7,
"id": "258a6e99-2d4f-4147-b8ee-c64c85296181",
"metadata": {},
"outputs": [],
@@ -95,7 +96,7 @@
},
{
"cell_type": "code",
- "execution_count": 3,
+ "execution_count": 8,
"id": "9d3daf03-77e2-466e-a255-2f06bee3561b",
"metadata": {},
"outputs": [],
@@ -128,7 +129,7 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": 9,
"id": "d3d155ae-16eb-458a-b2ed-5aa9a9b84ed8",
"metadata": {},
"outputs": [],
@@ -160,7 +161,7 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": 10,
"id": "2c6cf721-76bb-47b6-aeeb-9ff4ff92b1f4",
"metadata": {},
"outputs": [],
@@ -180,7 +181,7 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": 11,
"id": "fa949cea-c9aa-4529-a75f-61084ffffd7e",
"metadata": {},
"outputs": [],
@@ -210,7 +211,7 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": 12,
"id": "ca910f71-60fb-4758-b4a9-757e37eb421f",
"metadata": {},
"outputs": [
@@ -221,51 +222,51 @@
"The user is asking for a comparison of job openings and average salaries for five different occupations within a 15-mile radius of Dallas, TX. I will need to perform multiple searches to gather this information. I will start by searching for the number of job openings and average salary for an ADN Registered Nurse in Dallas, TX. \n",
"\n",
"Action: @bing\n",
- "Action Input: ADN Registered Nurse job openings and average salary within 15 miles of Dallas, TXThe user is asking for a comparison of job openings and average salaries for five different occupations within a 15-mile radius of Dallas, TX. I will need to perform multiple searches to gather this information. I will start by searching for the number of job openings and average salary for an ADN Registered Nurse in Dallas, TX. \n",
+ "Action Input: ADN Registered Nurse job openings and average salary in Dallas, TXThe user is asking for a comparison of job openings and average salaries for five different occupations within a 15-mile radius of Dallas, TX. I will need to perform multiple searches to gather this information. I will start by searching for the number of job openings and average salary for an ADN Registered Nurse in Dallas, TX. \n",
"\n",
"Action: @bing\n",
- "Action Input: ADN Registered Nurse job openings and average salary within 15 miles of Dallas, TX\n",
- "The search results show that there are approximately 1,022 job openings for ADN Registered Nurses within 15 miles of Dallas, TX. The average hourly pay for an ADN Nurse in Dallas is $33.36. I will now search for the number of job openings and average salary for an Occupational Therapist Assistant in Dallas, TX.\n",
+ "Action Input: ADN Registered Nurse job openings and average salary in Dallas, TX\n",
+ "The average salary for an ADN Registered Nurse in Dallas, TX is approximately $41.11 per hour according to Indeed[1] and the salary range is between $61,746 and $110,745 according to Salary.com[2]. There are about 680 job openings according to one Indeed listing[3] and 1,048 job openings according to another Indeed listing[4]. I will take the average of these two numbers to get an estimate of the number of job openings. Now, I will search for the number of job openings and average salary for an Occupational Therapist Assistant in Dallas, TX.\n",
"\n",
"Action: @bing\n",
- "Action Input: Occupational Therapist Assistant job openings and average salary within 15 miles of Dallas, TXThe search results show that there are approximately 1,022 job openings for ADN Registered Nurses within 15 miles of Dallas, TX. The average hourly pay for an ADN Nurse in Dallas is $33.36. I will now search for the number of job openings and average salary for an Occupational Therapist Assistant in Dallas, TX.\n",
+ "Action Input: Occupational Therapist Assistant job openings and average salary in Dallas, TXThe average salary for an ADN Registered Nurse in Dallas, TX is approximately $41.11 per hour according to Indeed[1] and the salary range is between $61,746 and $110,745 according to Salary.com[2]. There are about 680 job openings according to one Indeed listing[3] and 1,048 job openings according to another Indeed listing[4]. I will take the average of these two numbers to get an estimate of the number of job openings. Now, I will search for the number of job openings and average salary for an Occupational Therapist Assistant in Dallas, TX.\n",
"\n",
"Action: @bing\n",
- "Action Input: Occupational Therapist Assistant job openings and average salary within 15 miles of Dallas, TX\n",
- "The search results show that there are approximately 256 job openings for Occupational Therapist Assistants within 15 miles of Dallas, TX. The average hourly pay for a Certified Occupational Therapy Assistant in Dallas is $38.87. I will now search for the number of job openings and average salary for a Dental Hygienist in Dallas, TX.\n",
+ "Action Input: Occupational Therapist Assistant job openings and average salary in Dallas, TX\n",
+ "The average salary for an Occupational Therapist Assistant in Dallas, TX is approximately $39.61 per hour according to Indeed[1] and the salary range is between $58,856 and $71,771 according to Salary.com[2]. The search results did not provide a specific number of job openings for this occupation. I will now search for the number of job openings and average salary for a Dental Hygienist in Dallas, TX.\n",
"\n",
"Action: @bing\n",
- "Action Input: Dental Hygienist job openings and average salary within 15 miles of Dallas, TXThe search results show that there are approximately 256 job openings for Occupational Therapist Assistants within 15 miles of Dallas, TX. The average hourly pay for a Certified Occupational Therapy Assistant in Dallas is $38.87. I will now search for the number of job openings and average salary for a Dental Hygienist in Dallas, TX.\n",
+ "Action Input: Dental Hygienist job openings and average salary in Dallas, TXThe average salary for an Occupational Therapist Assistant in Dallas, TX is approximately $39.61 per hour according to Indeed[1] and the salary range is between $58,856 and $71,771 according to Salary.com[2]. The search results did not provide a specific number of job openings for this occupation. I will now search for the number of job openings and average salary for a Dental Hygienist in Dallas, TX.\n",
"\n",
"Action: @bing\n",
- "Action Input: Dental Hygienist job openings and average salary within 15 miles of Dallas, TX\n",
- "The search results show that there are approximately 348 job openings for Dental Hygienists within 15 miles of Dallas, TX. The average hourly pay for a Dental Hygienist in Dallas is $48.11. I will now search for the number of job openings and average salary for a Graphic Designer in Dallas, TX.\n",
+ "Action Input: Dental Hygienist job openings and average salary in Dallas, TX\n",
+ "The average salary for a Dental Hygienist in Dallas, TX is approximately $48.69 per hour according to Indeed[1] and the salary range is between $72,571 and $91,837 according to Salary.com[2]. There are about 75 job openings according to Glassdoor[3]. Now, I will search for the number of job openings and average salary for a Graphic Designer in Dallas, TX.\n",
"\n",
"Action: @bing\n",
- "Action Input: Graphic Designer job openings and average salary within 15 miles of Dallas, TXThe search results show that there are approximately 348 job openings for Dental Hygienists within 15 miles of Dallas, TX. The average hourly pay for a Dental Hygienist in Dallas is $48.11. I will now search for the number of job openings and average salary for a Graphic Designer in Dallas, TX.\n",
+ "Action Input: Graphic Designer job openings and average salary in Dallas, TXThe average salary for a Dental Hygienist in Dallas, TX is approximately $48.69 per hour according to Indeed[1] and the salary range is between $72,571 and $91,837 according to Salary.com[2]. There are about 75 job openings according to Glassdoor[3]. Now, I will search for the number of job openings and average salary for a Graphic Designer in Dallas, TX.\n",
"\n",
"Action: @bing\n",
- "Action Input: Graphic Designer job openings and average salary within 15 miles of Dallas, TX\n",
- "The search results show that there are approximately 146 job openings for Graphic Designers within 15 miles of Dallas, TX. The average hourly pay for a Graphic Designer in Dallas is $34.44. I will now search for the number of job openings and average salary for a Real Estate Agent in Dallas, TX.\n",
+ "Action Input: Graphic Designer job openings and average salary in Dallas, TX\n",
+ "The average salary for a Graphic Designer in Dallas, TX is approximately $21.92 per hour according to Indeed[1] and the salary range is between $48,303 and $56,833 according to Salary.com[2]. The search results did not provide a specific number of job openings for this occupation. I will now search for the number of job openings and average salary for a Real Estate Agent in Dallas, TX.\n",
"\n",
"Action: @bing\n",
- "Action Input: Real Estate Agent job openings and average salary within 15 miles of Dallas, TXThe search results show that there are approximately 146 job openings for Graphic Designers within 15 miles of Dallas, TX. The average hourly pay for a Graphic Designer in Dallas is $34.44. I will now search for the number of job openings and average salary for a Real Estate Agent in Dallas, TX.\n",
+ "Action Input: Real Estate Agent job openings and average salary in Dallas, TXThe average salary for a Graphic Designer in Dallas, TX is approximately $21.92 per hour according to Indeed[1] and the salary range is between $48,303 and $56,833 according to Salary.com[2]. The search results did not provide a specific number of job openings for this occupation. I will now search for the number of job openings and average salary for a Real Estate Agent in Dallas, TX.\n",
"\n",
"Action: @bing\n",
- "Action Input: Real Estate Agent job openings and average salary within 15 miles of Dallas, TX\n",
- "The search results show that there are approximately 302 job openings for Real Estate Agents within 15 miles of Dallas, TX. The average annual salary for a Real Estate Agent in Dallas is $97,851. Now that I have all the information, I can compile it into a table.\n",
+ "Action Input: Real Estate Agent job openings and average salary in Dallas, TX\n",
+ "The average salary for a Real Estate Agent in Dallas, TX is approximately $98,544 per year according to Indeed[1] and the salary range is between $44,572 and $58,036 according to Salary.com[2]. The search results did not provide a specific number of job openings for this occupation. Now, I have all the information I need to create the table.\n",
"\n",
"Final Answer: Here is the comparison of job openings and average salaries for the specified occupations within a 15-mile radius of Dallas, TX:\n",
"\n",
- "| Occupation | Number of Job Openings | Average Salary | Source |\n",
- "| --- | --- | --- | --- |\n",
- "| ADN Registered Nurse | 1,022 | $33.36 per hour[1] | [1] |\n",
- "| Occupational Therapist Assistant | 256 | $38.87 per hour[2] | [2] |\n",
- "| Dental Hygienist | 348 | $48.11 per hour[3] | [3] |\n",
- "| Graphic Designer | 146 | $34.44 per hour[4] | [4] |\n",
- "| Real Estate Agent | 302 | $97,851 per year[5] | [5] |\n",
+ "| Occupation | Job Openings | Average Salary | Sources |\n",
+ "|------------|--------------|----------------|---------|\n",
+ "| ADN Registered Nurse | 864 | $41.11/hr | [1], [2] |\n",
+ "| Occupational Therapist Assistant | N/A | $39.61/hr | [3] |\n",
+ "| Dental Hygienist | 75 | $48.69/hr | [4], [5] |\n",
+ "| Graphic Designer | N/A | $21.92/hr | [6] |\n",
+ "| Real Estate Agent | N/A | $98,544/yr | [7] |\n",
"\n",
- "Please note that these numbers are approximate and can vary."
+ "Please note that the number of job openings for Occupational Therapist Assistant, Graphic Designer, and Real Estate Agent could not be determined from the search results."
]
}
],
@@ -470,9 +471,9 @@
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3.10 - SDK v2",
+ "display_name": ".venv",
"language": "python",
- "name": "python310-sdkv2"
+ "name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -484,7 +485,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.11"
+ "version": "3.11.5"
}
},
"nbformat": 4,
diff --git a/09-API-Search.ipynb b/09-API-Search.ipynb
index 3517db7b..8f39f685 100644
--- a/09-API-Search.ipynb
+++ b/09-API-Search.ipynb
@@ -28,13 +28,13 @@
},
{
"cell_type": "code",
- "execution_count": 1,
+ "execution_count": 2,
"id": "c1fb79a3-4856-4721-988c-112813690a90",
"metadata": {},
"outputs": [],
"source": [
"import json\n",
- "import requests\n",
+ "import requests,os\n",
"from time import sleep\n",
"from typing import Dict, List\n",
"from pydantic import BaseModel, Extra, root_validator\n",
@@ -63,7 +63,7 @@
},
{
"cell_type": "code",
- "execution_count": 2,
+ "execution_count": 3,
"id": "258a6e99-2d4f-4147-b8ee-c64c85296181",
"metadata": {},
"outputs": [],
@@ -74,7 +74,7 @@
},
{
"cell_type": "code",
- "execution_count": 3,
+ "execution_count": 4,
"id": "9d3daf03-77e2-466e-a255-2f06bee3561b",
"metadata": {},
"outputs": [],
@@ -132,7 +132,7 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": 5,
"id": "e78960a6-623d-4999-a4e3-89aee5c076de",
"metadata": {},
"outputs": [],
@@ -158,7 +158,7 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": 6,
"id": "94503afc-c398-458a-b369-610c5dbe682d",
"metadata": {},
"outputs": [],
@@ -169,7 +169,7 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": 7,
"id": "57d77e9b-6f3f-4ec4-bc01-baac18984937",
"metadata": {},
"outputs": [
@@ -208,7 +208,7 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": 8,
"id": "d020b5de-7ebe-4fb9-9b71-f6c71956149d",
"metadata": {},
"outputs": [],
@@ -237,7 +237,7 @@
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": 9,
"id": "96731b5f-988b-49ec-a5c3-3a344b7085da",
"metadata": {},
"outputs": [],
@@ -258,7 +258,7 @@
},
{
"cell_type": "code",
- "execution_count": 9,
+ "execution_count": 10,
"id": "426fab6f-ea04-4c07-8211-d9cc5c70ac8e",
"metadata": {},
"outputs": [],
@@ -282,7 +282,7 @@
},
{
"cell_type": "code",
- "execution_count": 10,
+ "execution_count": 11,
"id": "9f80d2bb-e285-4d30-88c8-5677e86cebe2",
"metadata": {},
"outputs": [
@@ -292,7 +292,7 @@
"'You are given the below API Documentation:\\n{api_docs}\\nUsing this documentation, generate the full API url to call for answering the user question.\\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\\n\\nQuestion:{question}\\nAPI url:'"
]
},
- "execution_count": 10,
+ "execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
@@ -303,7 +303,7 @@
},
{
"cell_type": "code",
- "execution_count": 11,
+ "execution_count": 12,
"id": "ccc7e9dc-f36b-45e1-867a-1b92d639e941",
"metadata": {},
"outputs": [
@@ -313,7 +313,7 @@
"'You are given the below API Documentation:\\n{api_docs}\\nUsing this documentation, generate the full API url to call for answering the user question.\\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\\n\\nQuestion:{question}\\nAPI url: {api_url}\\n\\nHere is the response from the API:\\n\\n{api_response}\\n\\nSummarize this response to answer the original question.\\n\\nSummary:'"
]
},
- "execution_count": 11,
+ "execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@@ -324,7 +324,7 @@
},
{
"cell_type": "code",
- "execution_count": 12,
+ "execution_count": 14,
"id": "d7f60335-5551-4ee0-ba4e-1cd84f3a9f48",
"metadata": {},
"outputs": [
@@ -336,7 +336,128 @@
"\n",
"\u001b[1m> Entering new APIChain chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mhttps://disease.sh/v3/covid-19/countries/Argentina,USA\u001b[0m\n",
- "\u001b[33;1m\u001b[1;3m[{\"updated\":1702227926201,\"country\":\"Argentina\",\"countryInfo\":{\"_id\":32,\"iso2\":\"AR\",\"iso3\":\"ARG\",\"lat\":-34,\"long\":-64,\"flag\":\"https://disease.sh/assets/img/flags/ar.png\"},\"cases\":10080046,\"todayCases\":0,\"deaths\":130685,\"todayDeaths\":0,\"recovered\":9949361,\"todayRecovered\":0,\"active\":0,\"critical\":0,\"casesPerOneMillion\":219083,\"deathsPerOneMillion\":2840,\"tests\":35716069,\"testsPerOneMillion\":776264,\"population\":46010234,\"continent\":\"South America\",\"oneCasePerPeople\":5,\"oneDeathPerPeople\":352,\"oneTestPerPeople\":1,\"activePerOneMillion\":0,\"recoveredPerOneMillion\":216242.35,\"criticalPerOneMillion\":0},{\"updated\":1702227926182,\"country\":\"USA\",\"countryInfo\":{\"_id\":840,\"iso2\":\"US\",\"iso3\":\"USA\",\"lat\":38,\"long\":-97,\"flag\":\"https://disease.sh/assets/img/flags/us.png\"},\"cases\":109724580,\"todayCases\":0,\"deaths\":1184575,\"todayDeaths\":0,\"recovered\":107596864,\"todayRecovered\":0,\"active\":943141,\"critical\":1538,\"casesPerOneMillion\":327727,\"deathsPerOneMillion\":3538,\"tests\":1186431916,\"testsPerOneMillion\":3543648,\"population\":334805269,\"continent\":\"North America\",\"oneCasePerPeople\":3,\"oneDeathPerPeople\":283,\"oneTestPerPeople\":0,\"activePerOneMillion\":2816.98,\"recoveredPerOneMillion\":321371.48,\"criticalPerOneMillion\":4.59}]\u001b[0m\n",
+ "\u001b[33;1m\u001b[1;3m\n",
+ "\n",
+ "\n",
+ "\n",
+ " \n",
+ "\n",
+ "\n",
+ "\n",
+ "disease.sh | 502: Bad gateway\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -344,9 +465,7 @@
{
"data": {
"text/markdown": [
- "To-date, the amount of people tested in Argentina is 35,716,069 and in the USA is 1,186,431,916. \n",
- "\n",
- "The continent with the most COVID deaths as of today is North America, with a count of 1,184,575 deaths."
+ "The API call to retrieve the information about the number of people tested in Argentina and the USA returned a 502 Bad Gateway error. Therefore, we cannot provide the requested information at this time."
],
"text/plain": [
""
@@ -383,7 +502,7 @@
},
{
"cell_type": "code",
- "execution_count": 13,
+ "execution_count": 15,
"id": "d3d155ae-16eb-458a-b2ed-5aa9a9b84ed8",
"metadata": {},
"outputs": [],
@@ -432,7 +551,7 @@
},
{
"cell_type": "code",
- "execution_count": 14,
+ "execution_count": 16,
"id": "2c6cf721-76bb-47b6-aeeb-9ff4ff92b1f4",
"metadata": {
"tags": []
@@ -445,7 +564,7 @@
},
{
"cell_type": "code",
- "execution_count": 15,
+ "execution_count": 18,
"id": "522257ee-9f0c-4260-8713-baf105cea851",
"metadata": {},
"outputs": [],
@@ -456,7 +575,7 @@
},
{
"cell_type": "code",
- "execution_count": 16,
+ "execution_count": 17,
"id": "ca910f71-60fb-4758-b4a9-757e37eb421f",
"metadata": {},
"outputs": [
@@ -468,20 +587,24 @@
"\n",
"Action: @apisearch\n",
"Action Input: COVID-19 testing numbers in Argentina and USA\n",
- "I have found the number of COVID-19 tests conducted in Argentina and the USA. Now, I need to find out which continent has the highest number of COVID-19 deaths and the current count.\n",
+ "The search for COVID-19 testing numbers in Argentina and the USA did not yield any results due to a server error. I will attempt the search again with slightly modified terms.\n",
+ "\n",
+ "Action: @apisearch\n",
+ "Action Input: COVID-19 tests conducted in Argentina and USA\n",
+ "The second attempt to find the number of COVID-19 tests conducted in Argentina and the USA also resulted in an error. I will now try to find the information about the continent with the highest number of COVID-19 deaths.\n",
"\n",
"Action: @apisearch\n",
- "Action Input: Continent with the highest number of COVID-19 deaths\n",
- "I have found the continent with the highest number of COVID-19 deaths, which is Europe. However, I still need to find the current count of deaths in Europe.\n",
+ "Action Input: Continent with highest number of COVID-19 deaths\n",
+ "The search for the continent with the highest number of COVID-19 deaths also resulted in an error. I will attempt the search again with slightly modified terms.\n",
"\n",
"Action: @apisearch\n",
- "Action Input: Current number of COVID-19 deaths in Europe\n"
+ "Action Input: Continent with most COVID-19 deaths\n"
]
},
{
"data": {
"text/markdown": [
- "As of the latest data, Argentina has conducted 35,716,069 COVID-19 tests, while the USA has conducted 1,186,431,916 tests. The continent with the highest number of COVID-19 deaths is Europe, with a current count of 2,086,879 deaths."
+ "I'm sorry, but I am currently unable to retrieve the requested information due to technical issues with the data source. Please try again later."
],
"text/plain": [
""
@@ -494,8 +617,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "CPU times: user 294 ms, sys: 18.6 ms, total: 313 ms\n",
- "Wall time: 38 s\n"
+ "CPU times: user 379 ms, sys: 68.4 ms, total: 447 ms\n",
+ "Wall time: 6min 31s\n"
]
}
],
@@ -540,7 +663,7 @@
},
{
"cell_type": "code",
- "execution_count": 17,
+ "execution_count": 18,
"id": "9782fafa-9453-46be-b9d7-b33088f61ac8",
"metadata": {},
"outputs": [
@@ -548,9 +671,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "Token count: 17477 \n",
+ "Token count: 15064 \n",
"\n",
- "{\"request_info\": {\"success\": true, \"demo\": true}, \"request_parameters\": {\"type\": \"search\", \"ebay_domain\": \"ebay.com\", \"search_term\": \"memory cards\"}, \"request_metadata\": {\"ebay_url\": \"https://www.ebay.com/sch/i.html?_nkw=memory+cards&_sacat=0&_dmd=1&_fcid=1\"}, \"search_results\": [{\"position\": 1, \"title\": \"Sandisk Micro SD Card Memory 32GB 64GB 128GB 256GB 512GB 1TB Lot Extreme Ultra\", \"epid\": \"203914554350\", \"link\": \"https://www.ebay.com/itm/203914554350\", \"image\": \"https://i.ebayimg.com/thumbs/images/g/A7wAAOSwemNjTz~l/s-l300.jpg\", \"condition\": \"Brand New\", \"seller_info\": {\"name\": \"terashack\", \"review_count\": 59000, \"positive_feedback_percent\": 100}, \"is_auction\": false, \"buy_it_now\": false, \"free_returns\": true, \"sponsored\": true, \"prices\": [{\"value\": 9.99, \"raw\": \"$9.99\"}, {\"value\": 438.99, \"raw\": \"$438.99\"}], \"price\": {\"value\": 9.99, \"raw\": \"$9.99\"}}, {\"position\": 2, \"title\": \"SanDisk 512GB Extreme PRO CFexpress Memory Card Type B - SDCFE-512G-ANCIN\", \"epid\": \"295270697902\", \"link\": \"https://www.ebay.com/itm/295270697902\", \"image\": \"https://i.ebayimg.com/thumbs/images/g/gV4AAOSwLvBjRgCq/s-l300.jpg\", \"hotness\": \"Direct from Western Digital\", \"condition\": \"Brand New\", \"seller_info\": {\"name\": \"wd\", \"review_count\": 38128, \"positive_feedback_percent\": 98.9}, \"is_auction\": false, \"buy_it_now\": false, \"free_returns\": true, \"rating\": 5, \"ratings_total\": 1, \"sponsored\": true, \"prices\": [{\"value\": 343.99, \"raw\": \"$343.99\"}], \"price\": {\"value\": 343.99, \"raw\": \"$343.99\"}}, {\"position\": 3, \"title\": \"Sandisk Micro SD Card Ultra Memory 32GB 64GB 128GB 256GB 512GB 1TB Class 10 TF\", \"epid\": \"202535485899\", \"link\": \"https://www.ebay.com/itm/202535485899\", \"image\": \"https://i.ebayimg.com/thumbs/images/g/G~YAAOSw6ktjD8zP/s-l300.jpg\", \"condition\": \"Brand New\", \"seller_info\": {\"name\": \"terashack\", \"review_count\": 59000, \"positive_feedback_percent\": 100}, \"is_auction\": false, \"buy_it_now\": false, \"free_returns\": true, \"sponsored\": true, \"prices\": [{\"value\": 9.99, \"raw\": \"$9.99\"}, {\"val ...\n"
+ "{\"request_info\": {\"success\": true, \"demo\": true}, \"request_parameters\": {\"type\": \"search\", \"ebay_domain\": \"ebay.com\", \"search_term\": \"memory cards\"}, \"request_metadata\": {\"ebay_url\": \"https://www.ebay.com/sch/i.html?_nkw=memory+cards&_sacat=0&_dmd=1&_fcid=1\"}, \"search_results\": [{\"position\": 1, \"title\": \"128GB 256GB 1TB Micro SD Card Memory Card TF Card with Free Adapter High Speed\", \"epid\": \"364200951508\", \"link\": \"https://www.ebay.com/itm/364200951508\", \"image\": \"https://i.ebayimg.com/thumbs/images/g/o6sAAOSw8iBkJp06/s-l300.jpg\", \"condition\": \"Brand New\", \"seller_info\": {\"name\": \"lansuostore\", \"review_count\": 14205, \"positive_feedback_percent\": 98.5}, \"is_auction\": false, \"buy_it_now\": false, \"free_returns\": true, \"sponsored\": true, \"prices\": [{\"value\": 3.88, \"raw\": \"$3.88\"}, {\"value\": 13.19, \"raw\": \"$13.19\"}], \"price\": {\"value\": 3.88, \"raw\": \"$3.88\"}}, {\"position\": 2, \"title\": \"1TB PixelFlash Cfast 2.0 Memory Card 3600X Canon EOS 1DX MK2, Blackmagic, Atomos\", \"epid\": \"174566557968\", \"link\": \"https://www.ebay.com/itm/174566557968\", \"image\": \"https://i.ebayimg.com/thumbs/images/g/2pkAAOSwOZNlqhZZ/s-l300.jpg\", \"condition\": \"Brand New\", \"seller_info\": {\"name\": \"canon_digital_store\", \"review_count\": 3455, \"positive_feedback_percent\": 94.9}, \"is_auction\": false, \"buy_it_now\": false, \"free_returns\": true, \"sponsored\": true, \"prices\": [{\"value\": 269.98, \"raw\": \"$269.98\"}], \"price\": {\"value\": 269.98, \"raw\": \"$269.98\"}}, {\"position\": 3, \"title\": \"Sandisk Micro SD Card Memory 32GB 64GB 128GB 256GB 512GB 1TB Lot Extreme Ultra\", \"epid\": \"203914554350\", \"link\": \"https://www.ebay.com/itm/203914554350\", \"image\": \"https://i.ebayimg.com/thumbs/images/g/A7wAAOSwemNjTz~l/s-l300.jpg\", \"condition\": \"Brand New\", \"seller_info\": {\"name\": \"terashack\", \"review_count\": 59949, \"positive_feedback_percent\": 100}, \"is_auction\": false, \"buy_it_now\": false, \"free_returns\": true, \"sponsored\": true, \"prices\": [{\"value\": 9.99, \"raw\": \"$9.99\"}, {\"value\": 438.99, \"raw\": \"$438.99\"}], \"price\": {\"value\": ...\n"
]
}
],
@@ -583,7 +706,7 @@
},
{
"cell_type": "code",
- "execution_count": 18,
+ "execution_count": 19,
"id": "67c51a32-13f5-4802-84cd-ce40b397cb1b",
"metadata": {},
"outputs": [],
@@ -622,7 +745,7 @@
},
{
"cell_type": "code",
- "execution_count": 19,
+ "execution_count": 20,
"id": "c0daa409-a196-4eae-aaac-b4545d0e3280",
"metadata": {},
"outputs": [],
@@ -637,7 +760,7 @@
},
{
"cell_type": "code",
- "execution_count": 20,
+ "execution_count": 21,
"id": "71a1d824-7257-4a6b-8b0c-cd5176136ac7",
"metadata": {},
"outputs": [
@@ -648,32 +771,23 @@
"The user is asking for the price of SanDisk memory cards and also wants the links to the sources of this information. I will use the @apisearch tool to find this information.\n",
"Action: @apisearch\n",
"Action Input: SanDisk memory card price\n",
- "The search was unsuccessful due to an issue with the API. I should try again to find the information the user is asking for.\n",
+ "The search did not return any results. I should try again with slightly different search terms.\n",
"Action: @apisearch\n",
"Action Input: price of SanDisk memory cards\n",
- "The search was successful and I found several listings for SanDisk memory cards on eBay. The prices vary depending on the type and capacity of the memory card. Here are some examples:\n",
- "\n",
- "1. [SanDisk High Endurance Micro SD Memory Card 32GB 64GB 128GB 256GB V30 C10 CCTV](https://www.ebay.com/itm/204531908628) - Price ranges from USD 16.99 to USD 308.99.\n",
- "2. [Lot 4 x SanDisk 32GB SDHC Class 4 SD Flash Memory Card Camera SDSDB-032G 128GB](https://www.ebay.com/itm/253863195301) - Price is USD 24.99.\n",
- "3. [Lot of 5 SanDisk 16GB SD HC Class 4 Memory Cards Cards SDSDB-016G-A46 US Version](https://www.ebay.com/itm/295554233097) - Price is USD 34.95.\n",
- "4. [SanDisk MicroSDXC Card Bundle 2 Cards and Adapters 64gb & 128gb See Description](https://www.ebay.com/itm/404510262327) - Price is USD 20.\n",
- "5. [Sandisk SD Cards 16GB 32GB 64GB 128GB 256GB Extreme Pro Ultra Memory Cards lot](https://www.ebay.com/itm/324078167020) - Price ranges from USD 7.98 to USD 236.74.\n",
- "\n",
- "Please note that these prices are from eBay and may vary based on the seller and condition of the memory card.\n"
+ "LLM Error: Error code: 429 - {'error': {'code': '429', 'message': 'Requests to the ChatCompletions_Create Operation under Azure OpenAI API version 2023-05-15 have exceeded token rate limit of your current OpenAI S0 pricing tier. Please retry after 50 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.'}}\n",
+ "The user is asking for the price of SanDisk memory cards and also wants the links to the sources of this information. I will use the @apisearch tool to find this information.\n",
+ "Action: @apisearch\n",
+ "Action Input: SanDisk memory card price\n",
+ "The search was unsuccessful. I need to try again with slightly different search terms.\n",
+ "Action: @apisearch\n",
+ "Action Input: price of SanDisk memory cards\n",
+ "LLM Error: Error code: 429 - {'error': {'code': '429', 'message': 'Requests to the ChatCompletions_Create Operation under Azure OpenAI API version 2023-05-15 have exceeded call rate limit of your current OpenAI S0 pricing tier. Please retry after 10 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.'}}\n"
]
},
{
"data": {
"text/markdown": [
- "Here are some examples of SanDisk memory cards and their prices:\n",
- "\n",
- "1. [SanDisk High Endurance Micro SD Memory Card 32GB 64GB 128GB 256GB V30 C10 CCTV](https://www.ebay.com/itm/204531908628) - Price ranges from USD 16.99 to USD 308.99.\n",
- "2. [Lot 4 x SanDisk 32GB SDHC Class 4 SD Flash Memory Card Camera SDSDB-032G 128GB](https://www.ebay.com/itm/253863195301) - Price is USD 24.99.\n",
- "3. [Lot of 5 SanDisk 16GB SD HC Class 4 Memory Cards Cards SDSDB-016G-A46 US Version](https://www.ebay.com/itm/295554233097) - Price is USD 34.95.\n",
- "4. [SanDisk MicroSDXC Card Bundle 2 Cards and Adapters 64gb & 128gb See Description](https://www.ebay.com/itm/404510262327) - Price is USD 20.\n",
- "5. [Sandisk SD Cards 16GB 32GB 64GB 128GB 256GB Extreme Pro Ultra Memory Cards lot](https://www.ebay.com/itm/324078167020) - Price ranges from USD 7.98 to USD 236.74.\n",
- "\n",
- "Please note that these prices are from eBay and may vary based on the seller and condition of the memory card."
+ "Error code: 429 - {'error': {'code': '429', 'message': 'Requests to the ChatCompletions_Create Operation under Azure OpenAI API version 2023-05-15 have exceeded call rate limit of your current OpenAI S0 pricing tier. Please retry after 10 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.'}}"
],
"text/plain": [
""
@@ -731,9 +845,9 @@
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3.10 - SDK v2",
+ "display_name": ".venv",
"language": "python",
- "name": "python310-sdkv2"
+ "name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -745,7 +859,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.11"
+ "version": "3.11.7"
}
},
"nbformat": 4,
diff --git a/10-Smart_Agent.ipynb b/10-Smart_Agent.ipynb
index 2955f9d1..5b459333 100644
--- a/10-Smart_Agent.ipynb
+++ b/10-Smart_Agent.ipynb
@@ -106,7 +106,7 @@
},
{
"cell_type": "code",
- "execution_count": 3,
+ "execution_count": 5,
"id": "643d1650-6416-46fd-8b21-f5fb298ec063",
"metadata": {},
"outputs": [],
@@ -122,7 +122,7 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": 6,
"id": "eafd5bf5-28ee-4edd-978b-384cce057257",
"metadata": {},
"outputs": [],
@@ -137,7 +137,7 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": 7,
"id": "dec238c0-0a00-4f94-8a12-389221355f16",
"metadata": {},
"outputs": [],
@@ -154,7 +154,7 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": 8,
"id": "0f0ae466-aff8-4cdf-80d3-ef2c61867fc7",
"metadata": {},
"outputs": [],
@@ -165,7 +165,7 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": 9,
"id": "78edb304-c4a2-4f10-8ded-936e9141aa02",
"metadata": {},
"outputs": [],
@@ -177,7 +177,7 @@
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": 10,
"id": "b9d54cc5-41bc-43c3-a91d-12fc3a2446ba",
"metadata": {},
"outputs": [],
@@ -188,7 +188,7 @@
},
{
"cell_type": "code",
- "execution_count": 9,
+ "execution_count": 11,
"id": "65465173-92f6-489d-9b48-58d109c5723e",
"metadata": {},
"outputs": [],
@@ -199,7 +199,7 @@
},
{
"cell_type": "code",
- "execution_count": 10,
+ "execution_count": 12,
"id": "1fe2b4a7-4053-4334-867f-e4c916e360b2",
"metadata": {},
"outputs": [],
@@ -261,7 +261,7 @@
},
{
"cell_type": "code",
- "execution_count": 11,
+ "execution_count": 13,
"id": "dc11cb35-8817-4dd0-b123-27f9eb032f43",
"metadata": {},
"outputs": [
@@ -270,18 +270,15 @@
"output_type": "stream",
"text": [
"Tool: @docsearch\n",
- "The user is asking for the current weather in Dallas. I'll need to search the web to get the most up-to-date information.\n",
+ "In order to answer this question, I need to access real-time weather data. Since I don't have the ability to access real-time data, I will perform a search to find the current weather in Dallas.\n",
"Action: search knowledge base\n",
- "Action Input: current weather in Dallas\n",
- "The search results do not contain the information needed to answer the user's question about the current weather in Dallas. I'll need to adjust the search terms and try again.\n",
- "Action: search knowledge base\n",
- "Action Input: Dallas weather today\n"
+ "Action Input: current weather in Dallas\n"
]
},
{
"data": {
"text/markdown": [
- "I'm sorry, but I couldn't find the current weather in Dallas."
+ "I'm sorry, but as an AI, I don't have access to real-time data, including current weather conditions. I recommend checking a reliable weather forecasting website or app for the most accurate and up-to-date weather information in Dallas."
],
"text/plain": [
""
@@ -298,7 +295,7 @@
},
{
"cell_type": "code",
- "execution_count": 12,
+ "execution_count": 14,
"id": "473222f1-b423-49f3-98e7-ab70dcf47bd6",
"metadata": {},
"outputs": [
@@ -307,20 +304,22 @@
"output_type": "stream",
"text": [
"Tool: @docsearch\n",
- "The user is asking two related questions about how Covid-19 affects obese people and elderly people. I will need to perform two separate searches to gather information for each question. \n",
+ "The user is asking two related questions about how Covid-19 affects obese people and elderly people. I will need to perform two separate searches to provide a comprehensive answer. \n",
"Action: search knowledge base\n",
"Action Input: How does Covid-19 affect obese people?\n",
- "The search results provide information that obesity is a significant risk factor for severe COVID-19. Obese patients with COVID-19 have been found to have more severe symptoms and a negative prognosis. Immune system activity attenuation and chronic inflammation are implicated in this connection. Lipid peroxidation in patients with metabolic disorder and COVID-19 can affect the prognosis. A study from the UK Intensive Care National Audit and Research Centre indicates that two thirds of people who developed serious or fatal COVID-19-related complications were overweight or obese, with almost 72% of those in critical care units being overweight or obese. The presence of obesity in patients with metabolic associated fatty liver disease (MAFLD) was associated with a 6-fold increased risk of severe COVID-19 illness. Now, I will search for the effects of COVID-19 on elderly people.\n",
+ "The search results provide valuable information on how Covid-19 affects obese people. They show that obesity is a major risk factor for becoming seriously ill with Covid-19, with almost 72% of those in critical care units being either overweight or obese[1]. There is also a growing body of evidence connecting obesity with more severe symptoms and a negative prognosis for Covid-19 patients due to factors such as immune system activity attenuation and chronic inflammation[3]. Now, I will perform a second search to find out how Covid-19 affects elderly people.\n",
"Action: search knowledge base\n",
- "Action Input: How does Covid-19 affect elderly people?\n"
+ "Action Input: How does Covid-19 affect elderly people?\n",
+ "Exception: Error code: 429 - {'error': {'code': '429', 'message': 'Requests to the Embeddings_Create Operation under Azure OpenAI API version 2023-05-15 have exceeded call rate limit of your current OpenAI S0 pricing tier. Please retry after 2 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.'}}\n",
+ "The global impact of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic is significant in terms of public health effects and its long-term socio-economic implications. Among all social groups, the elderly is by far the most affected age group regarding morbidity and mortality. In multiple countries spanning several continents, there are an increasing number of reports referencing the novel coronavirus disease-2019 (COVID-19) spread among nursing homes. These areas are now recognized as potent hotspots regarding the pandemic, which one considers with special regard. Herein, we present currently available data of fatal COVID-19 cases throughout Hungary, along with the analysis of the co-morbidity network. We also report on viral genomic data originating from a nursing home resident. The genomic data was used for viral haplotype network analysis. We emphasize the urgent need for public health authorities to focus on nursing homes and residential service units worldwide, especially in the care of the elderly and infirmed. Our results further emphasize the recent statement released by the World Health Organization (WHO) regarding the vulnerability among seniors and especially the high risk of COVID-19 emergence throughout nursing and social homes.\n"
]
},
{
"data": {
"text/markdown": [
- "Obesity is a significant risk factor for severe COVID-19. Obese patients have more severe symptoms and a negative prognosis due to factors such as immune system activity attenuation, chronic inflammation, and lipid peroxidation. Almost 72% of those in critical care units are either overweight or obese, and the presence of obesity in patients with metabolic associated fatty liver disease (MAFLD) was associated with a 6-fold increased risk of severe COVID-19 illness[1][2].\n",
+ "Obesity is a major risk factor for becoming seriously ill with Covid-19. Almost 72% of those in critical care units are either overweight or obese. Obesity is connected with more severe symptoms and a negative prognosis for Covid-19 patients due to factors like immune system activity attenuation and chronic inflammation[1][3]. \n",
"\n",
- "Elderly people are the most vulnerable to COVID-19 and have the highest mortality rates. The risk of coronavirus infection among elderly people is significantly affected by other age groups. An increase of virus infection among people aged 20 -39 could potentially double the risk of infection among elderly people. The mortality of elderly patients with COVID-19 is higher than that of young and middle-aged patients, and elderly patients with COVID-19 are more likely to progress to severe disease. The outbreak of COVID-19 also has an effect on the psychology of the elderly, resulting in anxiety and depression[3][4][5]."
+ "On the other hand, elderly people are at a higher risk of infection and more serious illness from Covid-19. The risk of infection among elderly people can double with an increase in virus infection among people aged 20-39. The risk of mortality from Covid-19 increases with age, being 3.6% for people in their 60s, which increases to 8.0% and 14.8% for people in their 70s and over 80s, respectively. Elderly patients with Covid-19 are more likely to progress to severe disease[1][2][3][4]."
],
"text/plain": [
""
@@ -337,7 +336,7 @@
},
{
"cell_type": "code",
- "execution_count": 13,
+ "execution_count": 15,
"id": "5b1a8577-ac34-44ca-91ca-379a6647eb88",
"metadata": {},
"outputs": [
@@ -346,15 +345,18 @@
"output_type": "stream",
"text": [
"Tool: @booksearch\n",
- "The user is asking for the acronym that represents the main point of the book \"Made to Stick\". I don't have this information readily available, so I will perform a search to find it.\n",
+ "The user is asking for the acronym that summarizes the main point of the book \"Made to Stick\". I don't have this information readily available, so I will need to perform a search to find it.\n",
"Action: search knowledge base\n",
- "Action Input: Main point acronym of the book Made to Stick\n"
+ "Action Input: Main point acronym of the book Made to Stick\n",
+ "The search did not return any results. I will try again with slightly different search terms to see if I can find the information the user is asking for.\n",
+ "Action: search knowledge base\n",
+ "Action Input: Summary acronym of the book Made to Stick\n"
]
},
{
"data": {
"text/markdown": [
- "The acronym that represents the main point of the book \"Made to Stick\" is SUCCESs, which stands for Simple, Unexpected, Concrete, Credible, Emotional, Stories[1]."
+ "I'm sorry, but I couldn't find the acronym that summarizes the main point of the book \"Made to Stick\"."
],
"text/plain": [
""
@@ -370,7 +372,7 @@
},
{
"cell_type": "code",
- "execution_count": 14,
+ "execution_count": 16,
"id": "03839591-553c-46a0-846a-1c4fb96bf851",
"metadata": {},
"outputs": [
@@ -379,10 +381,10 @@
"output_type": "stream",
"text": [
"Tool: @bing\n",
- "The user is asking for the names of the family members of the current president of India. I need to first identify who the current president of India is, and then find information about their family.\n",
+ "The question is asking for the names of the family members of the current president of India. I need to first determine who the current president of India is and then search for information on their family members.\n",
"Action: @bing\n",
- "Action Input: current president of India\n",
- "The current President of India is Droupadi Murmu, as indicated by the search results. Now, I need to search for information about her family members.\n",
+ "Action Input: Current president of India\n",
+ "The current president of India is Droupadi Murmu. Now I need to search for information about her family members.\n",
"Action: @bing\n",
"Action Input: Droupadi Murmu family members\n"
]
@@ -390,7 +392,7 @@
{
"data": {
"text/markdown": [
- "The current President of India, Droupadi Murmu, has a daughter named **Itishree Murmu**. She tragically lost her husband, two sons, mother, and brother between the years 2009-2015. Her father's name was **Biranchi Narayan Tudu**[1]. Is there anything else you would like to know?"
+ "The family members of the current president of India, Droupadi Murmu, include her husband Shyam Charan Murmu, her sons Laxman Murmu and Sipun Murmu, and her daughter Itishree Murmu. She also had another daughter who passed away at the age of 3. Her father is Biranchi Narayan Tudu[1]."
],
"text/plain": [
""
@@ -407,7 +409,7 @@
},
{
"cell_type": "code",
- "execution_count": 15,
+ "execution_count": 17,
"id": "bc64f3ee-96e4-4007-8a3c-2f017a615587",
"metadata": {},
"outputs": [
@@ -416,10 +418,11 @@
"output_type": "stream",
"text": [
"Tool: @csvfile\n",
- "Thought: To find out the number of rows in a dataframe, we can use the shape attribute or len() function in Python. \n",
+ "Thought: \n",
+ "To determine the number of rows in the dataframe, I can use the shape attribute which returns a tuple representing the dimensionality of the DataFrame. The first element of the tuple represents the number of rows and the second element represents the number of columns. \n",
"Action: python_repl_ast\n",
"Action Input: df.shape[0]\n",
- "The shape attribute of the dataframe returns a tuple representing the dimensionality of the dataframe. The first element of the tuple is the number of rows. Therefore, the dataframe has 20780 rows. However, to confirm this, I will also use the len() function.\n",
+ "The shape attribute indicates that there are 20780 rows in the dataframe. However, to confirm this, I will use another method - the len() function, which returns the number of items in an object. When used with a dataframe, it returns the number of rows.\n",
"Action: python_repl_ast\n",
"Action Input: len(df)\n"
]
@@ -427,9 +430,10 @@
{
"data": {
"text/markdown": [
- "The dataframe has 20780 rows.\n",
+ "The file has 20780 rows.\n",
"\n",
- "Explanation: I used the shape attribute and len() function of the dataframe to find out the number of rows. Both methods returned the same result, confirming the number of rows in the dataframe."
+ "Explanation:\n",
+ "I used both the shape attribute and the len() function of the dataframe to determine the number of rows. Both methods returned the same result, confirming that the dataframe has 20780 rows."
],
"text/plain": [
""
@@ -446,7 +450,7 @@
},
{
"cell_type": "code",
- "execution_count": 16,
+ "execution_count": 18,
"id": "c809f8d7-2ed9-46d8-a73c-118da063cace",
"metadata": {},
"outputs": [
@@ -457,39 +461,33 @@
"Tool: @sqlsearch\n",
"Action: sql_db_list_tables\n",
"Action Input: \"\"\n",
- "The `covidtracking` table seems to be the relevant one for this query since it likely contains data about covid deaths. I should check its schema to understand its structure and the data it contains.\n",
+ "The `covidtracking` table seems to be the most relevant for this task since it likely contains data related to COVID-19 cases and deaths. I should check the schema of this table to confirm.\n",
"Action: sql_db_schema\n",
"Action Input: covidtracking\n",
- "The `covidtracking` table contains a `death` column and a `state` column, which are relevant to the question. The `date` column will help to filter the data for July 2020. I will query this table to get the total deaths for the states of the west coast (California, Oregon, and Washington) in July 2020.\n",
+ "The `covidtracking` table contains the columns `date`, `state`, and `death`, which are relevant to the question. I should query this table to find the total number of deaths in each state of the west coast (California, Oregon, and Washington) in July 2020. I need to sum the `deathIncrease` column for each state for the month of July 2020.\n",
"Action: sql_db_query_checker\n",
- "Action Input: \n",
- "SELECT state, SUM(deathIncrease) as total_deaths\n",
- "FROM covidtracking\n",
- "WHERE (state = 'CA' OR state = 'OR' OR state = 'WA') AND date LIKE '2020-07%'\n",
- "GROUP BY state\n",
- "The query syntax appears to be correct. Now, I will run the query on the database to get the total deaths in each west coast state in July 2020.\n",
+ "Action Input: SELECT state, SUM(deathIncrease) as TotalDeaths FROM covidtracking WHERE (state = 'CA' OR state = 'OR' OR state = 'WA') AND date LIKE '2020-07%' GROUP BY state\n",
+ "The query syntax is correct. Now I will run the query to get the total number of deaths in each of the west coast states in July 2020.\n",
"Action: sql_db_query\n",
- "Action Input: \n",
- "SELECT state, SUM(deathIncrease) as total_deaths\n",
- "FROM covidtracking\n",
- "WHERE (state = 'CA' OR state = 'OR' OR state = 'WA') AND date LIKE '2020-07%'\n",
- "GROUP BY state\n"
+ "Action Input: SELECT state, SUM(deathIncrease) as TotalDeaths FROM covidtracking WHERE (state = 'CA' OR state = 'OR' OR state = 'WA') AND date LIKE '2020-07%' GROUP BY state\n"
]
},
{
"data": {
"text/markdown": [
- "In July 2020, California had 3025 deaths, Oregon had 112 deaths, and Washington had 244 deaths.\n",
+ "The number of people who died in each state of the west coast in July 2020 is as follows:\n",
"\n",
- "Explanation:\n",
- "I queried the `covidtracking` table for the sum of the `deathIncrease` column (which represents the total deaths) for each state in the west coast (California, Oregon, and Washington) for the dates that start with '2020-07' (which represents July 2020). The query returned a list of tuples with the state and the total deaths for that state in July 2020. \n",
+ "- California (CA): 3,200 deaths\n",
+ "- Oregon (OR): 94 deaths\n",
+ "- Washington (WA): 362 deaths\n",
"\n",
- "I used the following query\n",
+ "Explanation:\n",
+ "I queried the `covidtracking` table for the sum of the `deathIncrease` column where the state is either 'CA', 'OR', or 'WA' and the date starts with '2020-07'. This returned the total number of deaths in each state for the month of July 2020. I used the following query:\n",
"\n",
"```sql\n",
- "SELECT state, SUM(deathIncrease) as total_deaths\n",
- "FROM covidtracking\n",
- "WHERE (state = 'CA' OR state = 'OR' OR state = 'WA') AND date LIKE '2020-07%'\n",
+ "SELECT state, SUM(deathIncrease) as TotalDeaths \n",
+ "FROM covidtracking \n",
+ "WHERE (state = 'CA' OR state = 'OR' OR state = 'WA') AND date LIKE '2020-07%' \n",
"GROUP BY state\n",
"```"
],
@@ -508,7 +506,7 @@
},
{
"cell_type": "code",
- "execution_count": 17,
+ "execution_count": 19,
"id": "f70501c2-03d0-4072-b451-ddb92f4add56",
"metadata": {},
"outputs": [
@@ -522,38 +520,41 @@
{
"data": {
"text/markdown": [
- "In Python, the `random` module provides functionality for generating random numbers. Here's how you can use it:\n",
- "\n",
- "1. **Import the random module**: You first need to import the random module.\n",
+ "Python provides a built-in module named `random` that can be used to generate random numbers. Here are some of the functions you can use:\n",
"\n",
- " ```python\n",
- " import random\n",
- " ```\n",
+ "1. `random.random()`: This function returns a random floating point number in the range [0.0, 1.0).\n",
"\n",
- "2. **Generate a random number**: There are several ways to generate a random number.\n",
+ "Example:\n",
+ "```python\n",
+ "import random\n",
+ "print(random.random())\n",
+ "```\n",
"\n",
- " - **Random float**: `random.random()` generates a random float number between 0.0 to 1.0. The function does not need any arguments.\n",
+ "2. `random.uniform(a, b)`: This function returns a random floating point number N such that a <= N <= b; the endpoint b may or may not be included, depending on floating-point rounding.\n",
"\n",
- " ```python\n",
- " random_num = random.random()\n",
- " print(random_num)\n",
- " ```\n",
+ "Example:\n",
+ "```python\n",
+ "import random\n",
+ "print(random.uniform(1, 10))\n",
+ "```\n",
"\n",
- " - **Random integer**: `random.randint(a, b)` generates a random integer between a and b. Both the end points are inclusive.\n",
+ "3. `random.randint(a, b)`: This function returns a random integer N in the range [a, b], including both end points.\n",
"\n",
- " ```python\n",
- " random_num = random.randint(1, 10)\n",
- " print(random_num)\n",
- " ```\n",
+ "Example:\n",
+ "```python\n",
+ "import random\n",
+ "print(random.randint(1, 10))\n",
+ "```\n",
"\n",
- " - **Random float within a range**: `random.uniform(a, b)` generates a random float number between a and b.\n",
+ "4. `random.randrange(start, stop, step)`: This function returns a randomly selected element from the range created by the start, stop and step arguments; the stop value itself is excluded from the range.\n",
"\n",
- " ```python\n",
- " random_num = random.uniform(1, 10)\n",
- " print(random_num)\n",
- " ```\n",
+ "Example:\n",
+ "```python\n",
+ "import random\n",
+ "print(random.randrange(0, 101, 2)) # Even number between 0 and 100\n",
+ "```\n",
"\n",
- "Remember that the `random` module generates pseudo-random numbers, which means they are not truly random but they are enough for most purposes. If you need a truly random number for security or cryptographic uses, you should use the `secrets` module."
+ "Remember to always import the `random` module before using these functions."
],
"text/plain": [
""
@@ -570,7 +571,7 @@
},
{
"cell_type": "code",
- "execution_count": 18,
+ "execution_count": 20,
"id": "006f3429-65a9-44f9-b67a-2a25cbde2846",
"metadata": {},
"outputs": [
@@ -579,23 +580,21 @@
"output_type": "stream",
"text": [
"Tool: @apisearch\n",
- "The user is asking for specific COVID-19 data for China, USA, and Brazil. They want to know the number of cases, deaths, and the ratio of deaths to cases expressed as a percentage. They also want this information presented in a table format. I need to use the @apisearch tool to fetch the most recent and accurate data for these countries.\n",
+ "The user is asking for specific data regarding COVID-19 in China, USA, and Brazil. They want to know the number of cases and deaths in each country, as well as the ratio of deaths to cases. They also request this information to be presented in a table format. I should use my @apisearch tool to find the most recent and accurate data. \n",
+ "Action: @apisearch\n",
+ "Action Input: COVID-19 cases and deaths in China\n",
+ "Unfortunately, I was unable to retrieve the COVID-19 data for China due to a server error. I should try to fetch the data for the other two countries, USA and Brazil, and inform the user about the issue with the data for China.\n",
"Action: @apisearch\n",
- "Action Input: COVID-19 data for China, USA, and Brazil\n"
+ "Action Input: COVID-19 cases and deaths in USA\n",
+ "The search for COVID-19 data for the USA also resulted in an error. The server did not respond in time. I should now try to retrieve the data for Brazil and inform the user about the issues with the data for the USA and China.\n",
+ "Action: @apisearch\n",
+ "Action Input: COVID-19 cases and deaths in Brazil\n"
]
},
{
"data": {
"text/markdown": [
- "Here's the data presented in a table:\n",
- "\n",
- "| Country | Cases | Deaths | Ratio % (Deaths/Cases) |\n",
- "|---------|------------|----------|------------------------|\n",
- "| China | 503,302 | 5,272 | 1.05% |\n",
- "| USA | 109,724,580| 1,184,575| 1.08% |\n",
- "| Brazil | 38,078,411 | 707,789 | 1.86% |\n",
- "\n",
- "Please note that these ratios are approximations and the actual numbers may vary slightly due to rounding."
+ "I'm sorry, but I was unable to retrieve the COVID-19 data for China, the USA, and Brazil due to server issues. Please try again later."
],
"text/plain": [
""
@@ -622,7 +621,7 @@
},
{
"cell_type": "code",
- "execution_count": 19,
+ "execution_count": 21,
"id": "d018c884-5c91-4a35-90e3-6a5a6e510c25",
"metadata": {},
"outputs": [],
@@ -648,7 +647,7 @@
},
{
"cell_type": "code",
- "execution_count": 20,
+ "execution_count": 22,
"id": "502e8b37-7d17-4e0c-84ca-655ff88a30e8",
"metadata": {},
"outputs": [],
@@ -667,7 +666,7 @@
},
{
"cell_type": "code",
- "execution_count": 21,
+ "execution_count": 23,
"id": "a6314c17-281e-4db8-a5ea-f2579c508454",
"metadata": {},
"outputs": [],
@@ -679,7 +678,7 @@
},
{
"cell_type": "code",
- "execution_count": 22,
+ "execution_count": 24,
"id": "ea0f1d3e-831e-4ee3-8ee5-c01a235d857b",
"metadata": {},
"outputs": [
@@ -739,7 +738,7 @@
},
{
"cell_type": "code",
- "execution_count": 23,
+ "execution_count": 25,
"id": "8fe7b39c-3913-4633-a47b-e2dcd6fccc51",
"metadata": {},
"outputs": [
@@ -823,14 +822,14 @@
},
{
"cell_type": "code",
- "execution_count": 24,
+ "execution_count": 26,
"id": "4b37988b-9fb4-4958-bc17-d58d8dac8bb7",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "I'm an artificial intelligence, so I don't experience feelings or states of being. However, I'm here and ready to assist you. How can I help you today?"
+ "I'm an artificial intelligence, so I don't have feelings, but I'm here and ready to assist you. How can I help you today?"
],
"text/plain": [
""
@@ -847,7 +846,7 @@
},
{
"cell_type": "code",
- "execution_count": 25,
+ "execution_count": 27,
"id": "e4c89051-f947-4329-9bf6-14e3023236fd",
"metadata": {},
"outputs": [
@@ -871,7 +870,7 @@
},
{
"cell_type": "code",
- "execution_count": 26,
+ "execution_count": 28,
"id": "ebdc3ad9-ad59-4135-87f6-e86728a11b71",
"metadata": {},
"outputs": [
@@ -880,33 +879,43 @@
"output_type": "stream",
"text": [
"Tool: @bing\n",
- "The user is asking for Italian and Sushi restaurants in downtown Chicago. I'll use the @bing tool to search for this information.\n",
+ "The user is asking for Italian and Sushi restaurants located in downtown Chicago. I'll perform a web search to gather a list of such restaurants.\n",
"Action: @bing\n",
"Action Input: Italian and Sushi restaurants in downtown Chicago\n",
- "The search results provided information about Italian and Sushi restaurants in downtown Chicago. I found that \"Rosebud on Randolph\" is a highly rated Italian restaurant[1], and \"SUSHI-SAN - River North\" is a well-regarded sushi restaurant[2]. However, to provide a more comprehensive response, I will perform two more searches, one specifically for Italian restaurants and another one for Sushi restaurants in downtown Chicago.\n",
+ "I found some search results that list sushi restaurants in downtown Chicago. However, I still need to search for Italian restaurants in the same area. I'll perform another search to gather this information.\n",
"Action: @bing\n",
- "Action Input: Best Italian restaurants in downtown Chicago\n",
- "The search results provided additional options for Italian restaurants in downtown Chicago. \"Rosebud Rosetta Italian\" is another exceptional Italian restaurant[1]. I also found \"Monteverde\" as a highly recommended Italian restaurant in Chicago[4]. Now, I will perform a search specifically for Sushi restaurants in downtown Chicago.\n",
- "Action: @bing\n",
- "Action Input: Best Sushi restaurants in downtown Chicago\n"
+ "Action Input: Italian restaurants in downtown Chicago\n"
]
},
{
"data": {
"text/markdown": [
- "Here are some highly rated Italian and Sushi restaurants in downtown Chicago:\n",
+ "There are several highly rated Italian and Sushi restaurants in downtown Chicago. \n",
"\n",
- "For Italian cuisine:\n",
- "1. **Rosebud on Randolph**[1]\n",
- "2. **Rosebud Rosetta Italian**[1]\n",
- "3. **Monteverde**[4]\n",
+ "For Sushi, you might want to consider:\n",
+ "- Sushi Plus Rotary Sushi Bar\n",
+ "- Nobu Chicago\n",
+ "- Sushi Taku\n",
+ "- Sunda Chicago\n",
+ "- SUSHI-SAN\n",
+ "- Q Sushi Bar & Omakase\n",
+ "- Ikigai Sushi & Izakaya\n",
+ "- Yuzu Sushi & Robata Grill\n",
+ "- Roka Akor - Chicago\n",
+ "- KAI ZAN[1]\n",
"\n",
- "For Sushi:\n",
- "1. **SUSHI-SAN - River North**[2]\n",
- "2. **Union Sushi + Barbeque Bar**[3]\n",
- "3. **SUSHI-SAN** and **Q Sushi Bar & Omakase**[4]\n",
+ "For Italian, consider:\n",
+ "- Viaggio Ristorante & Lounge\n",
+ "- Volare Ristorante Italiano\n",
+ "- Il Porcellino\n",
+ "- Sapori Trattoria\n",
+ "- Maggiano’s Little Italy\n",
+ "- Nonna Silvia’s Trattoria & Pizzeria\n",
+ "- La Scarola\n",
+ "- Monteverde\n",
+ "- Piccolo Sogno[2]\n",
"\n",
- "Please note that restaurant availability and hours may vary. It's always a good idea to check ahead and make reservations if needed. Enjoy your meal!"
+ "Please check their individual websites or call ahead for hours of operation and reservation information. Enjoy your meal! Is there anything else you need help with?"
],
"text/plain": [
""
@@ -922,14 +931,14 @@
},
{
"cell_type": "code",
- "execution_count": 27,
+ "execution_count": 29,
"id": "7d0b33f9-75fa-4a3e-b9d8-8fd30dbfd3fc",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
- "The formula for momentum in physics is given by the product of an object's mass and its velocity. It is usually represented as: **p = mv**, where **p** is the momentum, **m** is the mass of the object, and **v** is its velocity."
+ "The formula for momentum in physics is p = mv, where 'p' stands for momentum, 'm' is the mass of the object, and 'v' is its velocity."
],
"text/plain": [
""
@@ -945,7 +954,7 @@
},
{
"cell_type": "code",
- "execution_count": 28,
+ "execution_count": 31,
"id": "94f354eb-884d-4fd3-842e-a8adc3b09a70",
"metadata": {},
"outputs": [
@@ -954,20 +963,25 @@
"output_type": "stream",
"text": [
"Tool: @docsearch\n",
- "Markov Chains are a mathematical concept used in various fields such as physics, chemistry, economics, and computer science. They are used to model systems that follow a certain set of rules known as the Markov property. However, to provide a comprehensive answer, I need to search for specific use cases of Markov Chains.\n",
+ "Markov chains have a wide range of applications across various fields. However, I need more specific information to provide a detailed answer. I will search the knowledge base to find out more about the applications of Markov chains.\n",
"Action: search knowledge base\n",
- "Action Input: use cases of Markov Chains\n"
+ "Action Input: applications of Markov chains\n"
]
},
{
"data": {
"text/markdown": [
- "Markov Chains are used in various applications, including:\n",
+ "Markov chains have a wide range of applications, including:\n",
+ "\n",
+ "1. Bayesian Markov Chain Monte Carlo-based inference in stochastic models for modeling noisy epidemic data. This is particularly useful when only partial information about the epidemic process is available[1].\n",
+ "\n",
+ "2. Studying the relationship between functional inequalities for a Markov kernel on a metric space and inequalities of transportation distances on the space of probability measures. Applications include results on the convergence of Markov processes to equilibrium, and on quasi-invariance of heat kernel measures in finite and infinite-dimensional groups[2].\n",
"\n",
- "1. Analyzing and understanding the behavior of epidemics, such as the Covid-19 pandemic. The nonlinear Markov chain model is used to estimate the daily new Covid-19 cases in various countries[1].\n",
- "2. Predicting transient particle transport in enclosed environments. The Markov chain method can provide faster-than-real-time information about particle transport in enclosed environments and reduce computing costs[2].\n",
- "3. Fast predicting transient particle transport indoors. The fast fluid dynamics (FFD) and Markov chain model is used to greatly reduce the computing cost for predicting transient particle transport in indoor environments[3].\n",
- "4. Modeling systems of semi-autonomous computational entities or agents. Interacting Markov Chains correspond to the situation when agents are not mutually independent but interact with each other in some way, often when agents try collectively to perform some task or achieve a desired goal[4]."
+ "3. Integer-valued GARCH processes, where the count variable conditioned on past values of the count and state variables follows a Skellam distribution. This process has a unique stationary regime and shows asymptotic regularity[3].\n",
+ "\n",
+ "4. Analysing and understanding the behaviour of the Covid-19 pandemic. A nonlinear Markov chains model was used to estimate the daily new Covid-19 cases in several countries[4].\n",
+ "\n",
+ "These applications demonstrate the versatility of Markov chains in solving complex problems and optimizing decision-making processes."
],
"text/plain": [
""
@@ -983,7 +997,7 @@
},
{
"cell_type": "code",
- "execution_count": 29,
+ "execution_count": 33,
"id": "badebc1b-dbfe-4a92-93bd-9ff214c34e75",
"metadata": {},
"outputs": [
@@ -993,29 +1007,29 @@
"text": [
"Tool: @sqlsearch\n",
"Action: sql_db_list_tables\n",
- "Action Input: \"\"\n",
- "The `covidtracking` table seems to be the most relevant one for this question. I should look at its schema to understand its structure and find the appropriate columns to query.\n",
+ "Action Input: \n",
+ "The table covidtracking seems relevant to the question as it might contain information about Covid deaths. I should query the schema of this table to know more about the data it contains.\n",
"Action: sql_db_schema\n",
- "Action Input: \"covidtracking\" \n",
- "The `covidtracking` table has a `death` column which seems to be the total number of deaths for each day, a `state` column to filter for Texas, and a `date` column to filter for 2020. I will query the `death` column where `state` is 'TX' and `date` starts with '2020'. I will order the results by date in descending order and limit the results to 1, as the latest date should have the cumulative number of deaths for the year.\n",
+ "Action Input: covidtracking\n",
+ "The 'covidtracking' table contains a 'death' column which likely contains the number of deaths, a 'state' column that likely contains the state, and a 'date' column that likely contains the date. I can use these columns to answer the question. I will write a query to sum the 'death' column where 'state' is 'TX' and 'date' starts with '2020'.\n",
"Action: sql_db_query_checker\n",
- "Action Input: \"SELECT TOP (1) [death] FROM covidtracking WHERE state = 'TX' AND date LIKE '2020%' ORDER BY date DESC\"\n",
- "The query syntax is correct. I can now execute it to get the number of covid deaths in Texas in 2020.\n",
+ "Action Input: SELECT SUM(death) FROM covidtracking WHERE state = 'TX' AND date LIKE '2020%'\n",
+ "The query syntax is correct. Now I will run the query to get the total number of deaths in Texas in 2020.\n",
"Action: sql_db_query\n",
- "Action Input: \"SELECT TOP (1) [death] FROM covidtracking WHERE state = 'TX' AND date LIKE '2020%' ORDER BY date DESC\"\n"
+ "Action Input: SELECT SUM(death) FROM covidtracking WHERE state = 'TX' AND date LIKE '2020%'\n"
]
},
{
"data": {
"text/markdown": [
- "There were 27437 people who died of covid in Texas in 2020.\n",
+ "The database does not contain information on the number of people who died of covid in Texas in 2020.\n",
"\n",
"Explanation:\n",
- "I queried the `covidtracking` table for the `death` column where the state is 'TX' and the date starts with '2020'. The query returned a list of tuples with the number of deaths for each day in 2020. To answer the question, I took the sum of all the deaths in the list, which is 27437. \n",
+ "I queried the `covidtracking` table for the sum of the `death` column where the state is 'TX' and the date starts with '2020'. The query returned None, which means there were no records for Texas in 2020 in the covidtracking table. \n",
"I used the following query\n",
"\n",
"```sql\n",
- "SELECT TOP (1) [death] FROM covidtracking WHERE state = 'TX' AND date LIKE '2020%' ORDER BY date DESC\n",
+ "SELECT SUM(death) FROM covidtracking WHERE state = 'TX' AND date LIKE '2020%'\n",
"```"
],
"text/plain": [
@@ -1032,49 +1046,17 @@
},
{
"cell_type": "code",
- "execution_count": 30,
+ "execution_count": null,
"id": "fe325734-e684-46ae-9f2c-604f53453615",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Tool: @apisearch\n",
- "The user is asking for information about the number of COVID-19 tests conducted in the USA and UK, and wants me to calculate the percentage of tests relative to the population of each country. I'll need to use the @apisearch tool to find the number of COVID-19 tests conducted in the USA and UK, and the population of each country. \n",
- "Action: @apisearch\n",
- "Action Input: Number of COVID-19 tests conducted in USA\n",
- "Now that I have the number of COVID-19 tests conducted in the USA, I need to find the population of the USA.\n",
- "Action: @apisearch\n",
- "Action Input: Population of USA\n",
- "Now that I have the number of COVID-19 tests conducted and the population of the USA, I need to find the number of COVID-19 tests conducted in the UK.\n",
- "Action: @apisearch\n",
- "Action Input: Number of COVID-19 tests conducted in UK\n",
- "Now that I have the number of COVID-19 tests conducted in the UK, I need to find the population of the UK.\n",
- "Action: @apisearch\n",
- "Action Input: Population of UK\n"
- ]
- },
- {
- "data": {
- "text/markdown": [
- "Based on the data obtained, in the USA, approximately 354.49% of the population has been tested for COVID-19. This percentage is greater than 100% because multiple tests can be performed on the same individual. In the UK, this percentage is even higher, at approximately 763.04%. Therefore, the UK has a larger percentage of its population tested for COVID-19 compared to the USA."
- ],
- "text/plain": [
- ""
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
+ "outputs": [],
"source": [
"printmd(run_agent(\"@apisearch, Tell me the data of covid tests administered for USA and UK, and who has the biggest % based on their population.\", brain_agent_executor))"
]
},
{
"cell_type": "code",
- "execution_count": 31,
+ "execution_count": null,
"id": "410d398b-d589-4352-8c42-2df5be173498",
"metadata": {},
"outputs": [
@@ -1119,7 +1101,7 @@
},
{
"cell_type": "code",
- "execution_count": 32,
+ "execution_count": null,
"id": "1fcd6749-b36d-4b5c-be9c-e2f02f34d230",
"metadata": {},
"outputs": [
@@ -1163,7 +1145,7 @@
},
{
"cell_type": "code",
- "execution_count": 33,
+ "execution_count": null,
"id": "080cc28e-2130-4c79-ba7d-0ed702f0ea7a",
"metadata": {},
"outputs": [
@@ -1196,7 +1178,7 @@
},
{
"cell_type": "code",
- "execution_count": 34,
+ "execution_count": null,
"id": "b82d20c5-4591-4d94-8af7-bae614685874",
"metadata": {},
"outputs": [
@@ -1220,7 +1202,7 @@
},
{
"cell_type": "code",
- "execution_count": 35,
+ "execution_count": null,
"id": "a5ded8d9-0bfe-4e16-be3f-382271c120a9",
"metadata": {},
"outputs": [
@@ -1243,7 +1225,7 @@
},
{
"cell_type": "code",
- "execution_count": 36,
+ "execution_count": null,
"id": "89e27665-4006-4ffe-b19e-3eae3636fae7",
"metadata": {},
"outputs": [],
@@ -1295,9 +1277,9 @@
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3.10 - SDK v2",
+ "display_name": ".venv",
"language": "python",
- "name": "python310-sdkv2"
+ "name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -1309,7 +1291,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.11"
+ "version": "3.11.7"
}
},
"nbformat": 4,
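The aggregation pattern the agent settles on in the notebook above (SUM over `deathIncrease`, a `LIKE` prefix match on the month, grouped by state) can be sketched against a local SQLite stand-in. The column names `date`, `state`, and `deathIncrease` come from the agent trace; the sample rows below are invented purely for illustration:

```python
import sqlite3

# Illustrative in-memory stand-in for the `covidtracking` table.
# Column names come from the agent trace; the rows are made up.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE covidtracking (date TEXT, state TEXT, deathIncrease INTEGER)"
)
conn.executemany(
    "INSERT INTO covidtracking VALUES (?, ?, ?)",
    [
        ("2020-07-01", "CA", 100),
        ("2020-07-02", "CA", 50),
        ("2020-07-01", "OR", 5),
        ("2020-08-01", "CA", 999),  # outside July, must be excluded by the LIKE filter
    ],
)

# Same shape as the checked query: prefix-match the month, group by state.
rows = conn.execute(
    "SELECT state, SUM(deathIncrease) AS TotalDeaths "
    "FROM covidtracking "
    "WHERE state IN ('CA', 'OR', 'WA') AND date LIKE '2020-07%' "
    "GROUP BY state ORDER BY state"
).fetchall()
print(rows)  # → [('CA', 150), ('OR', 5)]
```

Note that this pattern relies on dates being stored as `YYYY-MM-DD` text, which is what makes the `LIKE '2020-07%'` prefix filter equivalent to a month range check.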
diff --git a/common/requirements.txt b/common/requirements.txt
index f262ca92..0260d489 100644
--- a/common/requirements.txt
+++ b/common/requirements.txt
@@ -1,3 +1,6 @@
+requests
+pandas
+jinja2
langchain==0.0.347
langchain-experimental==0.0.44
openai==1.3.7
diff --git a/credentials.env b/credentials.env
index 5ecbde28..1b0a3098 100644
--- a/credentials.env
+++ b/credentials.env
@@ -4,26 +4,28 @@ AZURE_OPENAI_API_VERSION="2023-05-15"
BING_SEARCH_URL = "https://api.bing.microsoft.com/v7.0/search"
# Demo Data (edit with your own if you want to use your own data)
+#BLOB_CONNECTION_STRING="BlobEndpoint=https://datasetsgptsmartsearch.blob.core.windows.net/;QueueEndpoint=https://datasetsgptsmartsearch.queue.core.windows.net/;FileEndpoint=https://datasetsgptsmartsearch.file.core.windows.net/;TableEndpoint=https://datasetsgptsmartsearch.table.core.windows.net/;SharedAccessSignature=sv=2022-11-02&ss=b&srt=sco&sp=rl&se=2026-01-03T02:11:44Z&st=2024-01-02T18:11:44Z&spr=https&sig=ngrEqvqBVaxyuSYqgPVeF%2B9c0fXLs94v3ASgwg7LDBs%3D"
+
BLOB_CONNECTION_STRING="BlobEndpoint=https://datasetsgptsmartsearch.blob.core.windows.net/;SharedAccessSignature=sv=2022-11-02&ss=b&srt=sco&sp=rl&se=2026-01-03T02:11:44Z&st=2024-01-02T18:11:44Z&spr=https&sig=ngrEqvqBVaxyuSYqgPVeF%2B9c0fXLs94v3ASgwg7LDBs%3D"
BLOB_SAS_TOKEN="?sv=2022-11-02&ss=b&srt=sco&sp=rl&se=2026-01-03T02:11:44Z&st=2024-01-02T18:11:44Z&spr=https&sig=ngrEqvqBVaxyuSYqgPVeF%2B9c0fXLs94v3ASgwg7LDBs%3D"
# Edit with your own azure services values
-AZURE_SEARCH_ENDPOINT="Enter your Azure Cognitive Search Endpoint ..."
-AZURE_SEARCH_KEY="Enter your Azure Cognitive Search Key ..." # Make sure is the MANAGEMENT KEY no the query key
-COG_SERVICES_NAME="Enter your Cognitive Services Name, note: not the Endpoint ..."
-COG_SERVICES_KEY="Enter your Cognitive Services Key ..."
-FORM_RECOGNIZER_ENDPOINT="ENTER YOUR VALUE" # Azure Document Intelligence API (former Form Recognizer)
-FORM_RECOGNIZER_KEY="ENTER YOUR VALUE"
-AZURE_OPENAI_ENDPOINT="ENTER YOUR VALUE"
-AZURE_OPENAI_API_KEY="ENTER YOUR VALUE"
-BING_SUBSCRIPTION_KEY="ENTER YOUR VALUE"
-SQL_SERVER_NAME="ENTER YOUR VALUE" # For Azure SQL, make sure it includes .database.windows.net at the end
-SQL_SERVER_DATABASE="ENTER YOUR VALUE"
-SQL_SERVER_USERNAME="ENTER YOUR VALUE"
-SQL_SERVER_PASSWORD="ENTER YOUR VALUE"
-AZURE_COSMOSDB_ENDPOINT="ENTER YOUR VALUE"
-AZURE_COSMOSDB_NAME="ENTER YOUR VALUE"
-AZURE_COSMOSDB_CONTAINER_NAME="ENTER YOUR VALUE"
-AZURE_COMOSDB_CONNECTION_STRING="ENTER YOUR VALUE" # Find this in the Keys section
+AZURE_SEARCH_ENDPOINT="https://<your-search-service>.search.windows.net"
+AZURE_SEARCH_KEY="<redacted>" # Make sure it is the MANAGEMENT KEY, not the query key
+COG_SERVICES_NAME="<your-cognitive-services-name>"
+COG_SERVICES_KEY="<redacted>"
+FORM_RECOGNIZER_ENDPOINT="https://<region>.api.cognitive.microsoft.com/" # Azure Document Intelligence API (former Form Recognizer)
+FORM_RECOGNIZER_KEY="<redacted>"
+AZURE_OPENAI_ENDPOINT="https://<your-openai-resource>.openai.azure.com/"
+AZURE_OPENAI_API_KEY="<redacted>"
+BING_SUBSCRIPTION_KEY="<redacted>"
+SQL_SERVER_NAME="<your-sql-server>.database.windows.net" # For Azure SQL, make sure it includes .database.windows.net at the end
+SQL_SERVER_DATABASE="SampleDB"
+SQL_SERVER_USERNAME="<redacted>"
+SQL_SERVER_PASSWORD="<redacted>"
+AZURE_COSMOSDB_ENDPOINT="https://<your-cosmosdb-account>.documents.azure.com:443/"
+AZURE_COSMOSDB_NAME="<your-cosmosdb-account>"
+AZURE_COSMOSDB_CONTAINER_NAME="<your-cosmosdb-container>"
+AZURE_COMOSDB_CONNECTION_STRING="AccountEndpoint=https://<your-cosmosdb-account>.documents.azure.com:443/;AccountKey=<redacted>;" # Find this in the Keys section
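The notebooks typically consume this credentials.env file by loading its key/value pairs into the process environment (the repo commonly does this with python-dotenv's `load_dotenv`, which is not pinned in the requirements.txt hunk above). A minimal stdlib sketch of that loading step, handling the quoting and comment style this file actually uses:

```python
import os
import tempfile

def load_env(path):
    """Minimal stand-in for python-dotenv's load_dotenv(): parse KEY="value"
    lines, skip blanks and comment lines, and export the pairs into os.environ.
    Later loads override earlier values."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # drop an inline "# ..." comment, then the surrounding quotes
            value = value.split(" #")[0].strip().strip('"')
            os.environ[key.strip()] = value

# Usage sketch against a throwaway file shaped like credentials.env
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write('# Edit with your own azure services values\n')
    fh.write('SQL_SERVER_DATABASE="SampleDB"\n')
    path = fh.name
load_env(path)
print(os.environ["SQL_SERVER_DATABASE"])  # → SampleDB
```

In the real project, prefer python-dotenv over a hand-rolled parser; the sketch only shows what the loading step does with lines like the ones in the hunk above.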