This year has been one to forget!
-But 2020 did have its bright spots, especially in the PSL community.
-This post reviews some of the highlights from the year.
-
-
The Library was able to welcome two new models to the catalog in 2020: microdf and OpenFisca-UK.
-microdf provides a number of useful tools for use with economic survey data.
-OpenFisca-UK builds off the OpenFisca platform, offering a microsimulation model for tax and benefit programs in the UK.
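microdf’s tools center on carrying survey weights through pandas-style calculations. As a rough illustration of the kind of weighted statistic it automates (plain NumPy here, not microdf’s actual API), a population-weighted Gini coefficient can be computed as:

```python
import numpy as np

def weighted_gini(values, weights):
    """Gini coefficient of `values`, where each record represents `weights` units."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    mean = np.average(x, weights=w)
    # Weighted mean absolute difference over all pairs (i, j).
    mad = (w[:, None] * w[None, :] * np.abs(x[:, None] - x[None, :])).sum() / w.sum() ** 2
    return mad / (2 * mean)

# Four survey records, each standing in for thousands of households.
incomes = [10_000, 25_000, 40_000, 120_000]
weights = [2_000, 1_500, 1_000, 500]
print(round(weighted_gini(incomes, weights), 3))  # 0.443
```

The pairwise formula is O(n²), which is fine for a sketch; a sorted cumulative-sum formulation scales better for full survey files.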
-
-
In addition, four new models were added to the Library as incubating projects. The ui-calculator model received a lot of attention this year, as it calculates unemployment insurance benefits across U.S. states, a major channel for delivering financial relief to individuals during the COVID crisis.
-PCI-Outbreak directly relates to the COVID crisis, using machine learning and natural language processing to estimate the true extent of the COVID pandemic in China.
-The model finds that actual COVID cases are significantly higher than what official statistics claim.
-The COVID-MCS model considers COVID case counts and test positivity rates to measure whether or not U.S. communities are meeting certain benchmarks in controlling the spread of the disease.
-On a lighter note, the Git-Tutorial project provides instruction and resources for learning to use Git and GitHub, with an emphasis on the workflow used by many projects in the PSL community.
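To make COVID-MCS’s benchmark idea concrete, here is a hypothetical check (the function, thresholds, and data are illustrative and not taken from the model itself):

```python
def meets_benchmarks(cases_per_100k, positivity,
                     case_threshold=10.0, positivity_threshold=0.05):
    """True if every day in the window is under both control thresholds."""
    return (max(cases_per_100k) < case_threshold
            and max(positivity) < positivity_threshold)

# A community with low, stable case rates and positivity meets the benchmarks.
print(meets_benchmarks([4.2, 5.1, 3.8], [0.02, 0.03, 0.025]))   # True
# Spikes in either series fail the check.
print(meets_benchmarks([12.5, 9.8, 11.0], [0.04, 0.06, 0.05]))  # False
```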
-
-
The organization surrounding the Policy Simulation Library has been bolstered in two ways.
-First, we have formed a relationship with the Open Collective Foundation, which is now our fiscal host.
-This allows PSL to accept tax-deductible contributions that will support the efforts of the community. Second, we’ve formed the PSL Foundation, with an initial board that includes Linda Gibbs, Glenn Hubbard, and Jason DeBacker.
-
-
Our outreach efforts have grown in 2020 to include the regular PSL Demo Day series and this PSL Blog.
-Community members have also presented work with PSL models at the PyData Global Conference, the Tax Economists Forum, AEI, the Coiled Podcast, and the Virtual Global Village Podcast.
-New users will also find a better experience learning how to use and contribute to PSL models as many PSL models have improved their documentation through the use of Jupyter Book (e.g., see the Tax-Calculator documentation).
As 2021 winds down, I wanted to take a few minutes to reflect on the Policy Simulation Library’s efforts over the past year.
-With an amazing community of contributors, supporters, and users, PSL has been able to make a real impact in 2021.
-
-
The library saw two new projects achieve “cataloged” status: Tax Foundation’s Capital Cost Recovery model and the Federal Reserve Bank of New York’s DSGE.jl model.
-Both models satisfy all the PSL criteria for transparency and reproducibility.
-Both are also written entirely in open source software: the Capital Cost Recovery model is in R and the DSGE model in Julia.
-
-
An exciting new project to join the Library this year is PolicyEngine.
-PolicyEngine is building open source tax and benefit microsimulation models and very user-friendly interfaces to these models.
-The goal of this project is to take policy analysis to the masses through intuitive web and mobile interfaces for policy models.
-The UK version of the PolicyEngine app has already seen use from politicians interested in reforming the tax and benefit system in the UK.
-
-
Another excellent new addition to the library is the Federal-State Tax Project.
-This project provides data imputation tools for constructing state tax data that are representative of each state while remaining consistent with federal totals.
-These datasets can then be used in microsimulation models, such as Tax-Calculator, to study the impact of federal tax laws across the states.
-Matt Jensen and Don Boyd have published several pieces with these tools, including in State Tax Notes.
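The flavor of the reweighting problem can be shown with a toy example (a simplified sketch with hypothetical names and targets, not the project’s actual method, which reconciles many state and federal targets at once, e.g. via iterative proportional fitting):

```python
import numpy as np

def rake_to_state_targets(weights, income, state, state_targets):
    """Scale each state's record weights so weighted income hits that state's target."""
    w = np.asarray(weights, dtype=float).copy()
    income = np.asarray(income, dtype=float)
    state = np.asarray(state)
    for s, target in state_targets.items():
        mask = state == s
        w[mask] *= target / (w[mask] * income[mask]).sum()
    return w

new_w = rake_to_state_targets(
    weights=[1.0, 1.0, 1.0, 1.0],
    income=[10.0, 20.0, 30.0, 40.0],
    state=["NY", "NY", "SC", "SC"],
    state_targets={"NY": 45.0, "SC": 105.0},  # hypothetical income targets
)
print(new_w)  # each weight scaled from 1.0 to 1.5
```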
-
-
PSL Foundation became an official business entity in 2021.
-While still awaiting a letter of determination for 501(c)(3) status from the IRS, PSL Foundation was able to raise more than $25,000 in the last few months of 2021 to support open source policy analysis!
-
-
PSL community members continued to interact several times each week in our public calls.
-The PSL Shop was launched in 2021 so that anyone can get themselves some PSL swag (with some of each purchase going back to the PSL Foundation to support the Library).
-In addition, PSL hosted 20 Demo Day presentations from 11 different presenters!
-These short talks covered everything from new projects to interesting applications of some of the first projects to join the Library, as well as general open source tools.
-
-
As in past years, PSL cataloged and incubating models were found to be of great use in current policy debates.
-Whether it was the ARPA, Biden administration proposals to expand the CTC, or California’s Basic Income Bill, the accessibility and reproducibility of these open source projects have made them a boon to policy analysts.
-
-
We are looking forward to a great 2022!
-We expect the Library to continue to grow, foresee many interesting and helpful Demo Days, and are planning a DC PSL Workshop for March 2022.
-We hope to see you around these or other events!
-
-
Best wishes from PSL for a happy and healthy New Year!
This has been another successful year for the Policy Simulation Library, whose great community of contributors continue to make innovative advances in open source policy analysis, and for the PSL Foundation, which supports the Library and its community.
-We are so thankful for all those who have made financial or technical contributions to the PSL this year!
-In this blog post, I want to take this time at the end of the year to reflect on a few of the highlights from 2022.
-
-
PolicyEngine, a PSL Foundation fiscally-sponsored project, launched PolicyEngine US in April and has since seen many use cases of the model (check out the PolicyEngine year-in-review here).
-PolicyEngine began by leveraging the OpenFisca platform but has since transitioned to its own PolicyEngine Core.
-PolicyEngine Core and related projects (such as PolicyEngine US and PolicyEngine UK) already meet all the criteria set forth by the Policy Simulation Library.
-Keep an eye out for lots more excellent tax and benefit policy analysis tools from PolicyEngine in 2023 and beyond!
-
-
-PSL Foundation has partnered with QuantEcon, acting as a fiscal sponsor for their projects that provide training and materials for economic modeling and econometrics using open source tools.
-In the summer of 2022, QuantEcon ran a massive open online course in India with more than 1,000 registrants.
-They also ran an online course for over 100 students from universities in Africa in 2022.
-Further, with the funding received through their partnership with PSL Foundation, QuantEcon will continue these efforts in 2023 with a planned in-person course in India.
-
-
PSL hosted its first in-person workshop in March.
-The workshop focused on open source tools for tax policy analysis including Tax-Calculator, Cost-of-Capital-Calculator, OG-USA, and PolicyEngine US.
-The PSL event was, appropriately enough, hosted at the MLK Memorial Library in DC.
-We filled the space with 30 attendees from think tanks, consultancies, and government agencies.
-The workshop was a great success and we look forward to hosting more in-person workshops in the future.
-
-
PSL’s bi-weekly Demo Day series continued throughout 2022, with 13 Demo Days this year.
-In these, we saw a wide array of presenters from institutions such as the Federal Reserve Bank of Atlanta, PolicyEngine, Tax Foundation, National Center for Children in Poverty, IZA Institute of Labor Economics, Channels, the University of South Carolina, the Center for Growth and Opportunity, and the American Enterprise Institute.
-You can go back and rewatch any of these presentations on YouTube.
-
-
It’s been a fantastic year and we expect even more from the community and PSL Foundation in 2023.
-PSL community members continue to interact several times each week on our public calls.
-Check out the events page and join us in the New Year!
-
-
From all of us at the PSL, best wishes for a happy and healthy New Year!
While there haven’t been any blog posts in 2023, it has been a productive year for the Policy Simulation Library (PSL) community and PSL Foundation!
-
-
We’ve continued to serve our mission through education and outreach efforts.
-We hosted 13 Demo Days in 2023, including presentations from individuals at the Congressional Budget Office, Allegheny County, NOAA, Johns Hopkins, QuantEcon, the City of New York, and other institutions.
-Archived videos of the Demo Days are available on our YouTube Channel.
-
-
-In addition, we hosted an in-person workshop at the National Tax Association’s annual conference in November.
-This event featured the PolicyEngine-US project and was led by Max Ghenis and Nikhil Woodruff, co-founders of PolicyEngine.
-Attendees included individuals from the local area (Denver) and conference attendees, who represented academia, government, and think tanks.
-Max and Nikhil provided an overview of PolicyEngine and then walked attendees through a hands-on exercise using the PolicyEngine US tool, having them write code to generate custom plots in a Google Colab notebook.
-It was a lot of fun – and the pizza was decent too!
-
-
Speaking of PolicyEngine, this fiscally-sponsored project of PSL Foundation had a banner year in terms of fundraising and development.
-The group received several grants in 2023 and closed out the year with a large grant from Arnold Ventures.
-They also submitted an NSF grant proposal and are awaiting a decision.
-The group added an experienced nonprofit executive, Leigh Gibson, to their team.
-Leigh provides support with fundraising and operations, and she’s been instrumental in these efforts.
-In terms of software development, the PolicyEngine team has been able to greatly leverage volunteers (more than 60!) with Pavel Makarchuk coming on as Policy Modeling Manager to help coordinate these efforts.
-With their community, PolicyEngine has codified numerous US state tax and benefit policies and has developed a robust method to create synthetic data for use in policy analysis.
-Be on the lookout for a lot more from them in 2024.
-
-
QuantEcon, another fiscally sponsored project, has also made tremendous contributions to open source economics in 2023.
-Most importantly, they ran a very successful summer school in West Africa.
-In addition, they have continued to make key contributions to open source software tools for teaching and training in economics.
-These include Jupyteach, which Spencer Lyon shared in our Demo Day series.
-With their online materials, textbooks, and workshops around the world, QuantEcon is shaping how researchers and policy analysts employ economic tools to solve real-world problems.
-
-
PSL Foundation added a third fiscally sponsored project, Policy Change Index (PCI) in 2023.
-PCI was founded by Weifeng Zhong, a Senior Research Fellow at the Mercatus Center at George Mason University, and uses natural language processing and machine learning to predict changes in policy among autocratic regimes.
-PCI has had a very successful start with PCI-China, predicting policy changes in China, and PCI-Outbreak, predicting the extent of true COVID-19 case counts in China during the pandemic.
-Currently, they are extending their work to include predictive indices for Russia, North Korea, and Iran.
-PSL-F is excited for the opportunity to help support this important work.
-
-
Other cataloged projects have continued to be widely used in 2023.
-To note a few of these use cases, the United Nations has partnered with Richard Evans and Jason DeBacker, maintainers of OG-Core, to help bring the modeling platform to developing countries they are assisting.
-Tax Foundation’s Capital Cost Recovery model has been updated to 2023 and used in their widely cited 2023 Tax Competitiveness Index.
-And the Tax-Calculator and TaxData projects both continue to be used by think tanks and researchers.
-
-
As 2023 comes to a close, we look forward to 2024.
-We’ll be launching a new PSLmodels.org website soon.
-And there’ll be many more events – we hope you join in.
-
-
From all of us at the PSL, best wishes for a happy and healthy New Year!
The Policy Simulation Library is hosting a workshop in Washington, DC on March 25 on open source tools for the analysis of tax policy. Participants will learn how to use open source models from the Library for revenue estimation, distributional analysis, and simulation of the economic impacts of tax policy. The workshop is intended to be a hands-on experience, and participants can expect to leave with the required software, documentation, and knowledge to continue using these tools. All models in the workshop are written in the Python programming language; familiarity with the language is helpful, but not required.
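As a taste of what revenue estimation involves, here is a deliberately tiny toy (the bracket schedules and sample units are illustrative only; the workshop models simulate full tax law over representative microdata):

```python
def tax_liability(income, brackets):
    """Tax under a piecewise schedule of (upper_threshold, marginal_rate) pairs."""
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

def revenue(schedule, incomes, weights):
    """Population revenue: sum of weighted liabilities."""
    return sum(w * tax_liability(y, schedule) for y, w in zip(incomes, weights))

baseline = [(10_000, 0.10), (50_000, 0.20), (float("inf"), 0.30)]
reform = [(10_000, 0.10), (50_000, 0.22), (float("inf"), 0.32)]
incomes = [8_000, 30_000, 75_000, 250_000]  # representative tax units
weights = [1.0e6, 2.0e6, 1.0e6, 0.2e6]      # population each represents

change = revenue(reform, incomes, weights) - revenue(baseline, incomes, weights)
print(f"Revenue change: ${change / 1e9:.2f} billion")  # $3.06 billion
```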
The workshop will be held at the Martin Luther King Jr. Memorial Library in Washington, DC. Participants are expected to arrive by 8:30am and the program will conclude at 1:00pm. Breakfast and lunch will be provided. PSL Foundation is sponsoring the event and there is no cost to attend. Attendance is limited to 30 in order to make this a dynamic and interactive workshop.
-
-
To register, please use this Google Form. Registration will close March 11. Participants will be expected to bring a laptop to the workshop where they can interact with the software in real time with the instructors. Registered participants will receive an email before the event with a list of software to install before the workshop.
-
-
Please feel free to share this invitation with your colleagues.
-
-.highlight .sd { color: #d14; }
-
-.highlight .s2 { color: #d14; }
-
-.highlight .se { color: #d14; }
-
-.highlight .sh { color: #d14; }
-
-.highlight .si { color: #d14; }
-
-.highlight .sx { color: #d14; }
-
-.highlight .sr { color: #009926; }
-
-.highlight .s1 { color: #d14; }
-
-.highlight .ss { color: #990073; }
-
-.highlight .bp { color: #999; }
-
-.highlight .vc { color: #008080; }
-
-.highlight .vg { color: #008080; }
-
-.highlight .vi { color: #008080; }
-
-.highlight .il { color: #099; }
-
-html { font-size: 16px; }
-
-/** Reset some basic elements */
-body, h1, h2, h3, h4, h5, h6, p, blockquote, pre, hr, dl, dd, ol, ul, figure { margin: 0; padding: 0; }
-
-/** Basic styling */
-body { font: 400 16px/1.5 -apple-system, system-ui, BlinkMacSystemFont, "Segoe UI", "Segoe UI Emoji", "Segoe UI Symbol", "Apple Color Emoji", Roboto, Helvetica, Arial, sans-serif; color: #111111; background-color: #fdfdfd; -webkit-text-size-adjust: 100%; -webkit-font-feature-settings: "kern" 1; -moz-font-feature-settings: "kern" 1; -o-font-feature-settings: "kern" 1; font-feature-settings: "kern" 1; font-kerning: normal; display: flex; min-height: 100vh; flex-direction: column; overflow-wrap: break-word; }
-
-/** Set `margin-bottom` to maintain vertical rhythm */
-h1, h2, h3, h4, h5, h6, p, blockquote, pre, ul, ol, dl, figure, .highlight { margin-bottom: 15px; }
-
-hr { margin-top: 30px; margin-bottom: 30px; }
-
-/** `main` element */
-main { display: block; /* Default value of `display` of `main` element is 'inline' in IE 11. */ }
-
-/** Images */
-img { max-width: 100%; vertical-align: middle; }
-
-/** Figures */
-figure > img { display: block; }
-
-figcaption { font-size: 14px; }
-
-/** Lists */
-ul, ol { margin-left: 30px; }
-
-li > ul, li > ol { margin-bottom: 0; }
-
-/** Headings */
-h1, h2, h3, h4, h5, h6 { font-weight: 400; }
-
-/** Links */
-a { color: #2a7ae2; text-decoration: none; }
-
-a:visited { color: #1756a9; }
-
-a:hover { color: #111111; text-decoration: underline; }
-
-.social-media-list a:hover, .pagination a:hover { text-decoration: none; }
-
-.social-media-list a:hover .username, .pagination a:hover .username { text-decoration: underline; }
-
-/** Blockquotes */
-blockquote { color: #828282; border-left: 4px solid #e8e8e8; padding-left: 15px; font-size: 1.125rem; font-style: italic; }
-
-blockquote > :last-child { margin-bottom: 0; }
-
-blockquote i, blockquote em { font-style: normal; }
-
-/** Code formatting */
-pre, code { font-family: "Menlo", "Inconsolata", "Consolas", "Roboto Mono", "Ubuntu Mono", "Liberation Mono", "Courier New", monospace; font-size: 0.9375em; border: 1px solid #e8e8e8; border-radius: 3px; background-color: #eeeeff; }
-
-code { padding: 1px 5px; }
-
-pre { padding: 8px 12px; overflow-x: auto; }
-
-pre > code { border: 0; padding-right: 0; padding-left: 0; }
-
-.highlight { border-radius: 3px; background: #eeeeff; }
-
-.highlighter-rouge .highlight { background: #eeeeff; }
-
-/** Wrapper */
-.wrapper { max-width: calc(1000px - (30px)); margin-right: auto; margin-left: auto; padding-right: 15px; padding-left: 15px; }
-
-@media screen and (min-width: 1200px) { .wrapper { max-width: calc(1000px - (30px * 2)); padding-right: 30px; padding-left: 30px; } }
-
-/** Clearfix */
-.wrapper:after { content: ""; display: table; clear: both; }
-
-/** Icons */
-.orange { color: #f66a0a; }
-
-.grey { color: #828282; }
-
-.svg-icon { width: 1.25em; height: 1.25em; display: inline-block; fill: currentColor; vertical-align: text-bottom; }
-
-/** Tables */
-table { margin-bottom: 30px; width: 100%; text-align: left; color: #3f3f3f; border-collapse: collapse; border: 1px solid #e8e8e8; }
-
-table tr:nth-child(even) { background-color: #f7f7f7; }
-
-table th, table td { padding: 10px 15px; }
-
-table th { background-color: #f0f0f0; border: 1px solid #e0e0e0; }
-
-table td { border: 1px solid #e8e8e8; }
-
-@media screen and (max-width: 1000px) { table { display: block; overflow-x: auto; -webkit-overflow-scrolling: touch; -ms-overflow-style: -ms-autohiding-scrollbar; } }
-
-/** Site header */
-.site-header { border-top: 5px solid #424242; border-bottom: 1px solid #e8e8e8; min-height: 55.95px; line-height: 54px; position: relative; }
-
-.site-title { font-size: 1.625rem; font-weight: 300; letter-spacing: -1px; margin-bottom: 0; float: left; }
-
-@media screen and (max-width: 800px) { .site-title { padding-right: 45px; } }
-
-.site-title, .site-title:visited { color: #424242; }
-
-.site-nav { position: absolute; top: 9px; right: 15px; background-color: #fdfdfd; border: 1px solid #e8e8e8; border-radius: 5px; text-align: right; }
-
-.site-nav .nav-trigger { display: none; }
-
-.site-nav .menu-icon { float: right; width: 36px; height: 26px; line-height: 0; padding-top: 10px; text-align: center; }
-
-.site-nav .menu-icon > svg path { fill: #424242; }
-
-.site-nav label[for="nav-trigger"] { display: block; float: right; width: 36px; height: 36px; z-index: 2; cursor: pointer; }
-
-.site-nav input ~ .trigger { clear: both; display: none; }
-
-.site-nav input:checked ~ .trigger { display: block; padding-bottom: 5px; }
-
-.site-nav .page-link { color: #111111; line-height: 1.5; display: block; padding: 5px 10px; margin-left: 20px; }
-
-.site-nav .page-link:not(:last-child) { margin-right: 0; }
-
-@media screen and (min-width: 1000px) { .site-nav { position: static; float: right; border: none; background-color: inherit; } .site-nav label[for="nav-trigger"] { display: none; } .site-nav .menu-icon { display: none; } .site-nav input ~ .trigger { display: block; } .site-nav .page-link { display: inline; padding: 0; margin-left: auto; } .site-nav .page-link:not(:last-child) { margin-right: 20px; } }
-
-/** Site footer */
-.site-footer { border-top: 1px solid #e8e8e8; padding: 30px 0; }
-
-.footer-heading { font-size: 1.125rem; margin-bottom: 15px; }
-
-.feed-subscribe .svg-icon { padding: 5px 5px 2px 0; }
-
-.contact-list, .social-media-list, .pagination { list-style: none; margin-left: 0; }
-
-.footer-col-wrapper, .social-links { font-size: 0.9375rem; color: #828282; }
-
-.footer-col { margin-bottom: 15px; }
-
-.footer-col-1, .footer-col-2 { width: calc(50% - (30px / 2)); }
-
-.footer-col-3 { width: calc(100% - (30px / 2)); }
-
-@media screen and (min-width: 1200px) { .footer-col-1 { width: calc(35% - (30px / 2)); } .footer-col-2 { width: calc(20% - (30px / 2)); } .footer-col-3 { width: calc(45% - (30px / 2)); } }
-
-@media screen and (min-width: 1000px) { .footer-col-wrapper { display: flex; } .footer-col { width: calc(100% - (30px / 2)); padding: 0 15px; } .footer-col:first-child { padding-right: 15px; padding-left: 0; } .footer-col:last-child { padding-right: 0; padding-left: 15px; } }
-
-/** Page content */
-.page-content { padding: 30px 0; flex: 1 0 auto; }
-
-.page-heading { font-size: 2rem; }
-
-.post-list-heading { font-size: 1.75rem; }
-
-.post-list { margin-left: 0; list-style: none; }
-
-.post-list > li { margin-bottom: 30px; }
-
-.post-meta { font-size: 14px; color: #828282; }
-
-.post-link { display: block; font-size: 1.5rem; }
-
-/** Posts */
-.post-header { margin-bottom: 30px; }
-
-.post-title, .post-content h1 { font-size: 2.625rem; letter-spacing: -1px; line-height: 1.15; }
-
-@media screen and (min-width: 1200px) { .post-title, .post-content h1 { font-size: 2.625rem; } }
-
-.post-content { margin-bottom: 30px; }
-
-.post-content h1, .post-content h2, .post-content h3, .post-content h4, .post-content h5, .post-content h6 { margin-top: 30px; }
-
-.post-content h2 { font-size: 1.75rem; }
-
-@media screen and (min-width: 1200px) { .post-content h2 { font-size: 2rem; } }
-
-.post-content h3 { font-size: 1.375rem; }
-
-@media screen and (min-width: 1200px) { .post-content h3 { font-size: 1.625rem; } }
-
-.post-content h4 { font-size: 1.25rem; }
-
-.post-content h5 { font-size: 1.125rem; }
-
-.post-content h6 { font-size: 1.0625rem; }
-
-.social-media-list, .pagination { display: table; margin: 0 auto; }
-
-.social-media-list li, .pagination li { float: left; margin: 5px 10px 5px 0; }
-
-.social-media-list li:last-of-type, .pagination li:last-of-type { margin-right: 0; }
-
-.social-media-list li a, .pagination li a { display: block; padding: 10px 12px; border: 1px solid #e8e8e8; }
-
-.social-media-list li a:hover, .pagination li a:hover { border-color: #dbdbdb; }
-
-/** Pagination navbar */
-.pagination { margin-bottom: 30px; }
-
-.pagination li a, .pagination li div { min-width: 41px; text-align: center; box-sizing: border-box; }
-
-.pagination li div { display: block; padding: 7.5px; border: 1px solid transparent; }
-
-.pagination li div.pager-edge { color: #e8e8e8; border: 1px dashed; }
-
-/** Grid helpers */
-@media screen and (min-width: 1200px) { .one-half { width: calc(50% - (30px / 2)); } }
-
-/*-----------------------------------*/
-/*--- IMPORT STYLES FOR FASTPAGES ---*/
-.post img { display: block; vertical-align: top; margin-left: auto; margin-right: auto; }
-
-img.emoji { display: inline !important; vertical-align: baseline !important; }
-
-.post figcaption { text-align: center; font-size: .8rem; font-style: italic; color: light-grey; }
-
-.page-content { -webkit-font-smoothing: antialiased !important; text-rendering: optimizeLegibility !important; font-family: "Segoe UI", SegoeUI, Roboto, "Segoe WP", "Helvetica Neue", "Helvetica", "Tahoma", "Arial", sans-serif !important; }
-
-.post-content p, .post-content li { font-size: 20px; color: #515151; }
-
-.post-link { font-weight: normal; }
-
-h1 { margin-top: 2.5rem !important; }
-
-h2 { margin-top: 2rem !important; }
-
-h3, h4 { margin-top: 1.5rem !important; }
-
-p { margin-top: 1rem !important; margin-bottom: 1rem !important; }
-
-h1, h2, h3, h4 { font-weight: normal !important; margin-bottom: 0.5rem !important; }
-
-h1 code, h2 code, h3 code, h4 code { font-size: 100%; }
-
-pre { margin-bottom: 1.5rem !important; }
-
-.post-title { margin-top: .5rem !important; }
-
-li h3, li h4 { margin-top: .05rem !important; margin-bottom: .05rem !important; }
-
-li .post-meta-description { color: #585858; font-size: 15px; margin-top: .05rem !important; margin-bottom: .05rem !important; }
-
-details.description[open] summary::after { content: attr(data-open); }
-
-details.description:not([open]) summary::after { content: attr(data-close); }
-
-.notebook-badge-image { border: 0 !important; }
-
-.footnotes { font-size: 12px !important; }
-
-.footnotes p, .footnotes li { font-size: 12px !important; }
-
-.social-media-list .svg-icon, .pagination .svg-icon { width: 25px !important; height: 23px !important; }
-
-.anchor-link { opacity: 0; padding-left: 0.375em; \-webkit-text-stroke: 1.75px white; \-webkit-transition: opacity 0.2s ease-in-out 0.1s; \-moz-transition: opacity 0.2s ease-in-out 0.1s; \-ms-transition: opacity 0.2s ease-in-out 0.1s; }
-
-h1:hover .anchor-link, h2:hover .anchor-link, h3:hover .anchor-link, h4:hover .anchor-link, h5:hover .anchor-link, h6:hover .anchor-link { opacity: 1; }
-
-.category-tags { margin-top: .25rem !important; margin-bottom: .25rem !important; font-size: 105%; }
-
-.post-meta-title, .post-meta { margin-top: .25em !important; margin-bottom: .25em !important; font-size: 105%; }
-
-.page-description { margin-top: .5rem !important; margin-bottom: .5rem !important; color: #585858; font-size: 115%; }
-
-.category-tags-icon { font-size: 75% !important; padding-left: 0.375em; opacity: 35%; }
-
-.category-tags-link { color: #bb8181 !important; font-size: 13px !important; }
-
-.js-search-results { padding-top: 0.2rem; }
-
-.search-results-list-item { padding-bottom: 1rem; }
-
-.search-results-list-item .search-result-title { font-size: 16px; color: #d9230f; }
-
-.search-result-rel-url { color: silver; }
-
-.search-results-list-item a { display: block; color: #777; }
-
-.search-results-list-item a:hover, .search-results-list-item a:focus { text-decoration: none; }
-
-.search-results-list-item a:hover .search-result-title { text-decoration: underline; }
-
-.search-result-rel-date { color: #6d788a; font-size: 14px; }
-
-.search-result-preview { color: #777; font-size: 16px; margin-top: .02rem !important; margin-bottom: .02rem !important; }
-
-.search-result-highlight { color: #2e0137; font-weight: bold; }
-
-table { white-space: normal; max-width: 100%; font-size: 100%; border: none; }
-
-table th { text-align: center !important; }
-
-::-webkit-scrollbar { width: 14px; height: 18px; }
-
-::-webkit-scrollbar-thumb { height: 6px; border: 4px solid rgba(0, 0, 0, 0); background-clip: padding-box; -webkit-border-radius: 7px; background-color: #9D9D9D; -webkit-box-shadow: inset -1px -1px 0px rgba(0, 0, 0, 0.05), inset 1px 1px 0px rgba(0, 0, 0, 0.05); }
-
-::-webkit-scrollbar-button { width: 0; height: 0; display: none; }
-
-::-webkit-scrollbar-corner { background-color: transparent; }
-
-.output_text.output_execute_result pre { white-space: pre-wrap; }
-
-.svg-icon.orange { width: 30px; height: 23px; }
-
-.code_cell { margin: 1.5rem 0px !important; }
-
-pre code { font-size: 15px !important; }
-
-/*-----------------------------------*/
-/*----- ADD YOUR STYLES BELOW -------*/
-.language-python + .language-plaintext { border-left: 1px solid grey; margin-left: 1rem !important; }
-
-[class^="language-"]:not(.language-plaintext) pre, [class^="language-"]:not(.language-plaintext) code { background-color: #323443 !important; color: #f8f8f2; }
-
-.language-python + .language-plaintext code { background-color: white !important; }
-
-.language-python + .language-plaintext pre { background-color: white !important; }
-
-.input_area pre, .input_area div { margin-bottom: 1.0rem !important; margin-top: 1.5rem !important; padding-bottom: 0 !important; padding-top: 0 !important; background: #323443 !important; -webkit-font-smoothing: antialiased; text-rendering: optimizeLegibility; font-family: Menlo, Monaco, Consolas, "Lucida Console", Roboto, Ubuntu, monospace; border-radius: 5px; font-size: 105%; }
-
-.output_area pre, .output_area div { margin-bottom: 1rem !important; margin-top: 0rem !important; padding-bottom: 0 !important; padding-top: 0 !important; }
-
-.input_area pre { border-left: 1px solid lightcoral; }
-
-.output_area pre { border-left: 1px solid grey; margin-left: 1rem !important; font-size: 16px; }
-
-.code_cell table { width: auto; }
-
-/* Dracula Theme v1.2.5 https://github.com/zenorocha/dracula-theme Copyright 2016, All rights reserved Code licensed under the MIT license */
-.highlight { background: #323443 !important; color: #f8f8f2 !important; }
-
-.highlight pre, .highlight code { background: #323443; color: #f8f8f2; font-size: 110%; }
-
-.highlight .hll, .highlight .s, .highlight .sa, .highlight .sb, .highlight .sc, .highlight .dl, .highlight .sd, .highlight .s2, .highlight .se, .highlight .sh, .highlight .si, .highlight .sx, .highlight .sr, .highlight .s1, .highlight .ss { color: #e7997a; }
-
-.highlight .go { color: #44475a; }
-
-.highlight .err, .highlight .g, .highlight .l, .highlight .n, .highlight .x, .highlight .ge, .highlight .gr, .highlight .gh, .highlight .gi, .highlight .gp, .highlight .gs, .highlight .gu, .highlight .gt, .highlight .ld, .highlight .no, .highlight .nd, .highlight .pi, .highlight .ni, .highlight .ne, .highlight .nn, .highlight .nx, .highlight .py, .highlight .w, .highlight .bp { color: #f8f8f2; background-color: #323443 !important; }
-
-.highlight .p { font-weight: bold; color: #66d9ef; }
-
-.highlight .ge { text-decoration: underline; }
-
-.highlight .bp { font-style: italic; }
-
-.highlight .c, .highlight .ch, .highlight .cm, .highlight .cpf, .highlight .cs { color: #6272a4; }
-
-.highlight .c1 { color: gray; }
-
-.highlight .kd, .highlight .kt, .highlight .nb, .highlight .nl, .highlight .nv, .highlight .vc, .highlight .vg, .highlight .vi, .highlight .vm { color: #8be9fd; }
-
-.highlight .kd, .highlight .nb, .highlight .nl, .highlight .nv, .highlight .vc, .highlight .vg, .highlight .vi, .highlight .vm { font-style: italic; }
-
-.highlight .fm, .highlight .na, .highlight .nc, .highlight .nf { color: #ace591; }
-
-.highlight .k, .highlight .o, .highlight .cp, .highlight .kc, .highlight .kn, .highlight .kp, .highlight .kr, .highlight .nt, .highlight .ow { color: #ff79c6; }
-
-.highlight .kc { color: #ace591; }
-
-.highlight .m, .highlight .mb, .highlight .mf, .highlight .mh, .highlight .mi, .highlight .mo, .highlight .il { color: #bd93f9; }
-
-.highlight .gd { color: #ff5555; }
-
-p code { font-size: 19px; }
-
-/*# sourceMappingURL=style.css.map */
\ No newline at end of file
diff --git a/assets/css/style.css.map b/assets/css/style.css.map
deleted file mode 100755
index 5a76d7b..0000000
--- a/assets/css/style.css.map
+++ /dev/null
@@ -1,30 +0,0 @@
-{
- "version": 3,
- "file": "style.css",
- "sources": [
- "style.scss",
- "../../tmp/jekyll-remote-theme-20231229-1-7f1xdb/_sass/minima/skins/classic.scss",
- "../../tmp/jekyll-remote-theme-20231229-1-7f1xdb/_sass/minima/skins/auto.scss",
- "../../tmp/jekyll-remote-theme-20231229-1-7f1xdb/_sass/minima/initialize.scss",
- "_sass/minima/custom-variables.scss",
- "../../tmp/jekyll-remote-theme-20231229-1-7f1xdb/_sass/minima/_base.scss",
- "../../tmp/jekyll-remote-theme-20231229-1-7f1xdb/_sass/minima/_layout.scss",
- "_sass/minima/custom-styles.scss",
- "_sass/minima/fastpages-styles.scss",
- "_sass/minima/fastpages-dracula-highlight.scss"
- ],
- "sourcesContent": [
- "@import\n \"minima/skins/classic\",\n \"minima/initialize\";\n",
- "@charset \"utf-8\";\n\n$color-scheme-auto: false;\n$color-scheme-dark: false;\n@import \"minima/skins/auto\";\n",
- "@charset \"utf-8\";\n\n// Default color scheme settings\n// These are overridden in classic.css and dark.scss\n\n$color-scheme-auto: true !default;\n$color-scheme-dark: false !default;\n\n\n// Light mode\n// ----------\n\n$lm-brand-color: #828282 !default;\n$lm-brand-color-light: lighten($lm-brand-color, 40%) !default;\n$lm-brand-color-dark: darken($lm-brand-color, 25%) !default;\n\n$lm-site-title-color: $lm-brand-color-dark !default;\n\n$lm-text-color: #111111 !default;\n$lm-background-color: #fdfdfd !default;\n$lm-code-background-color: #eeeeff !default;\n\n$lm-link-base-color: #2a7ae2 !default;\n$lm-link-visited-color: darken($lm-link-base-color, 15%) !default;\n$lm-link-hover-color: $lm-text-color !default;\n\n$lm-border-color-01: $lm-brand-color-light !default;\n$lm-border-color-02: lighten($lm-brand-color, 35%) !default;\n$lm-border-color-03: $lm-brand-color-dark !default;\n\n$lm-table-text-color: lighten($lm-text-color, 18%) !default;\n$lm-table-zebra-color: lighten($lm-brand-color, 46%) !default;\n$lm-table-header-bg-color: lighten($lm-brand-color, 43%) !default;\n$lm-table-header-border: lighten($lm-brand-color, 37%) !default;\n$lm-table-border-color: $lm-border-color-01 !default;\n\n\n// Syntax highlighting styles should be adjusted appropriately for every \"skin\"\n// ----------------------------------------------------------------------------\n\n@mixin lm-highlight {\n .highlight {\n .c { color: #998; font-style: italic } // Comment\n .err { color: #a61717; background-color: #e3d2d2 } // Error\n .k { font-weight: bold } // Keyword\n .o { font-weight: bold } // Operator\n .cm { color: #998; font-style: italic } // Comment.Multiline\n .cp { color: #999; font-weight: bold } // Comment.Preproc\n .c1 { color: #998; font-style: italic } // Comment.Single\n .cs { color: #999; font-weight: bold; font-style: italic } // Comment.Special\n .gd { color: #000; background-color: #fdd } // Generic.Deleted\n .gd .x { color: #000; background-color: #faa } // 
Generic.Deleted.Specific\n .ge { font-style: italic } // Generic.Emph\n .gr { color: #a00 } // Generic.Error\n .gh { color: #999 } // Generic.Heading\n .gi { color: #000; background-color: #dfd } // Generic.Inserted\n .gi .x { color: #000; background-color: #afa } // Generic.Inserted.Specific\n .go { color: #888 } // Generic.Output\n .gp { color: #555 } // Generic.Prompt\n .gs { font-weight: bold } // Generic.Strong\n .gu { color: #aaa } // Generic.Subheading\n .gt { color: #a00 } // Generic.Traceback\n .kc { font-weight: bold } // Keyword.Constant\n .kd { font-weight: bold } // Keyword.Declaration\n .kp { font-weight: bold } // Keyword.Pseudo\n .kr { font-weight: bold } // Keyword.Reserved\n .kt { color: #458; font-weight: bold } // Keyword.Type\n .m { color: #099 } // Literal.Number\n .s { color: #d14 } // Literal.String\n .na { color: #008080 } // Name.Attribute\n .nb { color: #0086B3 } // Name.Builtin\n .nc { color: #458; font-weight: bold } // Name.Class\n .no { color: #008080 } // Name.Constant\n .ni { color: #800080 } // Name.Entity\n .ne { color: #900; font-weight: bold } // Name.Exception\n .nf { color: #900; font-weight: bold } // Name.Function\n .nn { color: #555 } // Name.Namespace\n .nt { color: #000080 } // Name.Tag\n .nv { color: #008080 } // Name.Variable\n .ow { font-weight: bold } // Operator.Word\n .w { color: #bbb } // Text.Whitespace\n .mf { color: #099 } // Literal.Number.Float\n .mh { color: #099 } // Literal.Number.Hex\n .mi { color: #099 } // Literal.Number.Integer\n .mo { color: #099 } // Literal.Number.Oct\n .sb { color: #d14 } // Literal.String.Backtick\n .sc { color: #d14 } // Literal.String.Char\n .sd { color: #d14 } // Literal.String.Doc\n .s2 { color: #d14 } // Literal.String.Double\n .se { color: #d14 } // Literal.String.Escape\n .sh { color: #d14 } // Literal.String.Heredoc\n .si { color: #d14 } // Literal.String.Interpol\n .sx { color: #d14 } // Literal.String.Other\n .sr { color: #009926 } // Literal.String.Regex\n .s1 { color: 
#d14 } // Literal.String.Single\n .ss { color: #990073 } // Literal.String.Symbol\n .bp { color: #999 } // Name.Builtin.Pseudo\n .vc { color: #008080 } // Name.Variable.Class\n .vg { color: #008080 } // Name.Variable.Global\n .vi { color: #008080 } // Name.Variable.Instance\n .il { color: #099 } // Literal.Number.Integer.Long\n }\n}\n\n\n// Dark mode\n// ---------\n\n$dm-brand-color: #999999 !default;\n$dm-brand-color-light: lighten($dm-brand-color, 5%) !default;\n$dm-brand-color-dark: darken($dm-brand-color, 35%) !default;\n\n$dm-site-title-color: $dm-brand-color-light !default;\n\n$dm-text-color: #bbbbbb !default;\n$dm-background-color: #181818 !default;\n$dm-code-background-color: #212121 !default;\n\n$dm-link-base-color: #79b8ff !default;\n$dm-link-visited-color: $dm-link-base-color !default;\n$dm-link-hover-color: $dm-text-color !default;\n\n$dm-border-color-01: $dm-brand-color-dark !default;\n$dm-border-color-02: $dm-brand-color-light !default;\n$dm-border-color-03: $dm-brand-color !default;\n\n$dm-table-text-color: $dm-text-color !default;\n$dm-table-zebra-color: lighten($dm-background-color, 4%) !default;\n$dm-table-header-bg-color: lighten($dm-background-color, 10%) !default;\n$dm-table-header-border: lighten($dm-background-color, 21%) !default;\n$dm-table-border-color: $dm-border-color-01 !default;\n\n\n// Syntax highlighting styles should be adjusted appropriately for every \"skin\"\n// List of tokens: https://github.com/rouge-ruby/rouge/wiki/List-of-tokens\n// Some colors come from Material Theme Darker:\n// https://github.com/material-theme/vsc-material-theme/blob/master/scripts/generator/settings/specific/darker-hc.ts\n// https://github.com/material-theme/vsc-material-theme/blob/master/scripts/generator/color-set.ts\n// ----------------------------------------------------------------------------\n\n@mixin dm-highlight {\n .highlight {\n .c { color: #545454; font-style: italic } // Comment\n .err { color: #f07178; background-color: #e3d2d2 } // Error\n 
.k { color: #89DDFF; font-weight: bold } // Keyword\n .o { font-weight: bold } // Operator\n .cm { color: #545454; font-style: italic } // Comment.Multiline\n .cp { color: #545454; font-weight: bold } // Comment.Preproc\n .c1 { color: #545454; font-style: italic } // Comment.Single\n .cs { color: #545454; font-weight: bold; font-style: italic } // Comment.Special\n .gd { color: #000; background-color: #fdd } // Generic.Deleted\n .gd .x { color: #000; background-color: #faa } // Generic.Deleted.Specific\n .ge { font-style: italic } // Generic.Emph\n .gr { color: #f07178 } // Generic.Error\n .gh { color: #999 } // Generic.Heading\n .gi { color: #000; background-color: #dfd } // Generic.Inserted\n .gi .x { color: #000; background-color: #afa } // Generic.Inserted.Specific\n .go { color: #888 } // Generic.Output\n .gp { color: #555 } // Generic.Prompt\n .gs { font-weight: bold } // Generic.Strong\n .gu { color: #aaa } // Generic.Subheading\n .gt { color: #f07178 } // Generic.Traceback\n .kc { font-weight: bold } // Keyword.Constant\n .kd { font-weight: bold } // Keyword.Declaration\n .kp { font-weight: bold } // Keyword.Pseudo\n .kr { font-weight: bold } // Keyword.Reserved\n .kt { color: #FFCB6B; font-weight: bold } // Keyword.Type\n .m { color: #F78C6C } // Literal.Number\n .s { color: #C3E88D } // Literal.String\n .na { color: #008080 } // Name.Attribute\n .nb { color: #EEFFFF } // Name.Builtin\n .nc { color: #FFCB6B; font-weight: bold } // Name.Class\n .no { color: #008080 } // Name.Constant\n .ni { color: #800080 } // Name.Entity\n .ne { color: #900; font-weight: bold } // Name.Exception\n .nf { color: #82AAFF; font-weight: bold } // Name.Function\n .nn { color: #555 } // Name.Namespace\n .nt { color: #FFCB6B } // Name.Tag\n .nv { color: #EEFFFF } // Name.Variable\n .ow { font-weight: bold } // Operator.Word\n .w { color: #EEFFFF } // Text.Whitespace\n .mf { color: #F78C6C } // Literal.Number.Float\n .mh { color: #F78C6C } // Literal.Number.Hex\n .mi { color: 
#F78C6C } // Literal.Number.Integer\n .mo { color: #F78C6C } // Literal.Number.Oct\n .sb { color: #C3E88D } // Literal.String.Backtick\n .sc { color: #C3E88D } // Literal.String.Char\n .sd { color: #C3E88D } // Literal.String.Doc\n .s2 { color: #C3E88D } // Literal.String.Double\n .se { color: #EEFFFF } // Literal.String.Escape\n .sh { color: #C3E88D } // Literal.String.Heredoc\n .si { color: #C3E88D } // Literal.String.Interpol\n .sx { color: #C3E88D } // Literal.String.Other\n .sr { color: #C3E88D } // Literal.String.Regex\n .s1 { color: #C3E88D } // Literal.String.Single\n .ss { color: #C3E88D } // Literal.String.Symbol\n .bp { color: #999 } // Name.Builtin.Pseudo\n .vc { color: #FFCB6B } // Name.Variable.Class\n .vg { color: #EEFFFF } // Name.Variable.Global\n .vi { color: #EEFFFF } // Name.Variable.Instance\n .il { color: #F78C6C } // Literal.Number.Integer.Long\n }\n}\n\n\n// Mode selection\n// --------------\n\n\n// Classic skin (always light mode)\n// Assign outside of the if construct to establish global variable scope\n\n$brand-color: $lm-brand-color;\n$brand-color-light: $lm-brand-color-light;\n$brand-color-dark: $lm-brand-color-dark;\n\n$site-title-color: $lm-site-title-color;\n\n$text-color: $lm-text-color;\n$background-color: $lm-background-color;\n$code-background-color: $lm-code-background-color;\n\n$link-base-color: $lm-link-base-color;\n$link-visited-color: $lm-link-visited-color;\n$link-hover-color: $lm-link-hover-color;\n\n$border-color-01: $lm-border-color-01;\n$border-color-02: $lm-border-color-02;\n$border-color-03: $lm-border-color-03;\n\n$table-text-color: $lm-table-text-color;\n$table-zebra-color: $lm-table-zebra-color;\n$table-header-bg-color: $lm-table-header-bg-color;\n$table-header-border: $lm-table-header-border;\n$table-border-color: $lm-table-border-color;\n\n\n@if $color-scheme-auto {\n\n // Auto mode\n\n :root {\n --minima-brand-color: #{$lm-brand-color};\n --minima-brand-color-light: #{$lm-brand-color-light};\n 
AAAA,aAAa,CAAC,EACN,sBAAsB,EAAE,sBAAsB,EAC9C,cAAc,EAAE,6BAA6B,EAC7C,WAAW,EAAE,gHAAgH,GACpI;;AAGD,AAAA,aAAa,CAAC,CAAC,EAAE,aAAa,CAAC,EAAE,CAAC,EAC/B,SAAS,EAAE,IAAI,EACf,KAAK,EAAE,OAAO,GAChB;;AAED,AAAA,UAAU,CAAA,EACN,WAAW,EAAE,MAAM,GACtB;;AAGD,AAAA,EAAE,CAAC,EACC,UAAU,EAAC,iBAAiB,GAC/B;;AAED,AAAA,EAAE,CAAC,EACC,UAAU,EAAC,eAAe,GAC7B;;AAED,AAAA,EAAE,EAAE,EAAE,CAAC,EACH,UAAU,EAAC,iBAAiB,GAC/B;;AAED,AAAA,CAAC,CAAC,EACE,UAAU,EAAC,eAAe,EAC1B,aAAa,EAAC,eAAe,GAChC;;AAED,AAAA,EAAE,EAAE,EAAE,EAAE,EAAE,EAAE,EAAE,CAAC,EACX,WAAW,EAAE,iBAAiB,EAC9B,aAAa,EAAC,iBAAiB,GAIlC;;AAND,AAGI,EAHF,CAGE,IAAI,EAHJ,EAAE,CAGF,IAAI,EAHA,EAAE,CAGN,IAAI,EAHI,EAAE,CAGV,IAAI,CAAC,EACH,SAAS,EAAE,IAAI,GAChB;;AAGL,AAAA,GAAG,CAAC,EACA,aAAa,EAAC,iBAAiB,GAClC;;AAGD,AAAA,WAAW,CAAC,EAAE,UAAU,EAAE,gBAAgB,GAAI;;AAE9C,AACE,EADA,CACA,EAAE,EADJ,EAAE,CACI,EAAE,CAAC,EACP,UAAU,EAAC,iBAAiB,EAC5B,aAAa,EAAC,iBAAiB,GAC9B;;AAJH,AAKE,EALA,CAKA,sBAAsB,CAAC,EACrB,KAAK,EAAE,OAAe,EACtB,SAAS,EAAE,IAAI,EACf,UAAU,EAAC,iBAAiB,EAC5B,aAAa,EAAC,iBAAiB,GAChC;;AAMF,AAAA,OAAO,AAAA,YAAY,CAAA,AAAA,IAAC,AAAA,EAAM,OAAO,EAAE,KAAK,CAAC,EACxC,OAAO,EAAE,eAAe,GACzB;;AAED,AAAA,OAAO,AAAA,YAAY,CAAA,GAAK,EAAA,AAAA,IAAC,AAAA,GAAO,OAAO,EAAE,KAAK,CAAC,EAC7C,OAAO,EAAE,gBAAgB,GAC1B;;AAGD,AAAA,qBAAqB,CAAC,EACpB,MAAM,EAAC,YAAY,GACpB;;AAGD,AAAA,UAAU,CAAC,EACT,SAAS,EAAE,eAAe,GAI3B;;AALD,AAEE,UAFQ,CAER,CAAC,EAFH,UAAU,CAEL,EAAE,CAAA,EACH,SAAS,EAAE,eAAe,GAC5B;;AAIF,AACE,kBADgB,CAChB,SAAS,EFsMX,WAAW,CEtMT,SAAS,CAAC,EACR,KAAK,EAAE,eAAe,EACtB,MAAM,EAAE,eAAe,GACxB;;AAKH,AAAA,YAAY,CAAC,EACX,OAAO,EAAE,CAAC,EACV,YAAY,EAAE,OAAO,EACrB,oBAAoB,EAAE,YAAY,EAClC,mBAAmB,EAAE,6BAA6B,EAClD,gBAAgB,EAAE,6BAA6B,EAC/C,eAAe,EAAE,6BAA6B,GAC/C;;AAED,AAAA,EAAE,CAAC,KAAK,CAAC,YAAY,EACrB,EAAE,CAAC,KAAK,CAAC,YAAY,EACrB,EAAE,CAAC,KAAK,CAAC,YAAY,EACrB,EAAE,CAAC,KAAK,CAAC,YAAY,EACrB,EAAE,CAAC,KAAK,CAAC,YAAY,EACrB,EAAE,CAAC,KAAK,CAAC,YAAY,CAAC,EACpB,OAAO,EAAE,CAAC,GACX;;AAID,AAAA,cAAc,CAAC,EACb,UAAU,EAAE,iBAAiB,EAC7B,aAAa,EAAE,iBAAiB,EAChC,SAAS,EAAE,IAAI,GAChB;;AAGD,AAAA,gBAAgB,EAAE,UAAU,CAAA,
EAC1B,UAAU,EAAE,gBAAgB,EAC5B,aAAa,EAAE,gBAAgB,EAC/B,SAAS,EAAE,IAAI,GAChB;;AAED,AAAA,iBAAiB,CAAC,EAChB,UAAU,EAAE,gBAAgB,EAC5B,aAAa,EAAE,gBAAgB,EAC/B,KAAK,EAAE,OAAO,EACd,SAAS,EAAE,IAAI,GAChB;;AAGD,AAAA,mBAAmB,CAAC,EAClB,SAAS,EAAE,cAAc,EACzB,YAAY,EAAE,OAAO,EACrB,OAAO,EAAE,GAAG,GACb;;AACD,AAAA,mBAAmB,CAAC,EAClB,KAAK,EAAC,OAAkB,CAAC,UAAU,EACnC,SAAS,EAAE,eAAe,GAC3B;;AAGD,AAAA,kBAAkB,CAAC,EAAC,WAAW,EAAE,MAAM,GAAG;;AAC1C,AAAA,yBAAyB,CAAC,EAAC,cAAc,EAAE,IAAI,GAAG;;AAClD,AAAA,yBAAyB,CAAC,oBAAoB,CAAC,EAC7C,SAAS,EAAE,IAAI,EACf,KAAK,EAAE,OAAO,GACf;;AACD,AAAA,sBAAsB,CAAC,EAAC,KAAK,EAAE,MAAM,GAAG;;AACxC,AAAA,yBAAyB,CAAC,CAAC,CAAC,EAAC,OAAO,EAAE,KAAK,EAAE,KAAK,EAAE,IAAI,GAAG;;AAC3D,AAAA,yBAAyB,CAAC,CAAC,CAAC,KAAK,EAAE,yBAAyB,CAAC,CAAC,CAAC,KAAK,CAAC,EAAC,eAAe,EAAE,IAAI,GAAG;;AAC9F,AAAA,yBAAyB,CAAC,CAAC,CAAC,KAAK,CAAC,oBAAoB,CAAC,EAAC,eAAe,EAAE,SAAS,GAAG;;AAErF,AAAA,uBAAuB,CAAC,EACtB,KAAK,EAAE,OAAkB,EACzB,SAAS,EAAE,IAAI,GAChB;;AAED,AAAA,sBAAsB,CAAC,EACrB,KAAK,EAAE,IAAI,EACX,SAAS,EAAE,IAAI,EACf,UAAU,EAAC,iBAAiB,EAC5B,aAAa,EAAC,iBAAiB,GAChC;;AACD,AAAA,wBAAwB,CAAC,EACvB,KAAK,EAAE,OAAO,EACd,WAAW,EAAC,IAAI,GACjB;;AAED,AAAA,KAAK,CAAC,EACJ,WAAW,EAAE,MAAM,EACnB,SAAS,EAAE,IAAI,EACf,SAAS,EAAE,IAAI,EACf,MAAM,EAAC,IAAI,GAIZ;;AARD,AAKE,KALG,CAKH,EAAE,CAAA,EACA,UAAU,EAAE,MAAM,CAAA,UAAW,GAC9B;;EAID,AAAF,iBAAmB,CAAC,EAClB,KAAK,EAAE,IAAI,EACX,MAAM,EAAE,IAAI,GACb;;EACC,AAAF,uBAAyB,CAAC,EACxB,MAAM,EAAE,GAAG,EACX,MAAM,EAAE,GAAG,CAAC,KAAK,CAAC,gBAAgB,EAClC,eAAe,EAAE,WAAW,EAC5B,qBAAqB,EAAE,GAAG,EAC1B,gBAAgB,EAAE,OAAO,EACzB,kBAAkB,EAAE,KAAK,CAAE,IAAG,CAAE,IAAG,CAAC,GAAG,CAAC,mBAAmB,EAAE,KAAK,CAAC,GAAG,CAAC,GAAG,CAAC,GAAG,CAAC,mBAAmB,GACnG;;EACC,AAAF,wBAA0B,CAAC,EACzB,KAAK,EAAE,CAAC,EACR,MAAM,EAAE,CAAC,EACT,OAAO,EAAE,IAAI,GACd;;EACC,AAAF,wBAA0B,CAAC,EACzB,gBAAgB,EAAE,WAAW,GAC9B;;AAGD,AACE,YADU,AAAA,sBAAsB,CAChC,GAAG,CAAA,EACD,WAAW,EAAE,QAAQ,GACtB;;AAYH,AAAA,SAAS,AAAA,OAAO,CAAA,EACd,KAAK,EAAE,IAAI,EACX,MAAM,EAAE,IAAI,GACb;;AAED,AAAA,UAAU,CAAC,EACT,MAAM,EAAE,qBAAqB,GAC9B;;AAED,AAAA,GAAG,CAAC,IAAI,CAAC,EACP,S
AAS,EAAE,eAAe,GAC3B;;AD3PD,uCAAuC;AACvC,uCAAuC;AEUvC,AAAA,gBAAgB,GAAG,mBAAmB,CAAC,EACnC,WAAW,EAAE,cAAc,EAC3B,WAAW,EAAE,eAAe,GAC/B;;CAGD,AAAA,AAAA,KAAC,EAAO,WAAW,AAAlB,EAAmB,GAAK,CAAA,mBAAmB,EAAE,GAAG,GACjD,AAAA,KAAC,EAAO,WAAW,AAAlB,EAAmB,GAAK,CAAA,mBAAmB,EAAE,IAAI,CAAC,EAC/C,gBAAgB,EArBM,OAAO,CAqBc,UAAU,EACrD,KAAK,EApBO,OAAO,GAqBtB;;AAED,AAAA,gBAAgB,GAAG,mBAAmB,CAAC,IAAI,CAAC,EAAE,gBAAgB,EAAE,gBAAgB,GAAI;;AACpF,AAAA,gBAAgB,GAAG,mBAAmB,CAAC,GAAG,CAAE,EAAE,gBAAgB,EAAE,gBAAgB,GAAI;;AAIpF,AAAA,WAAW,CAAC,GAAG,EAAE,WAAW,CAAC,GAAG,CAAC,EAC7B,aAAa,EAAC,iBAAiB,EAC/B,UAAU,EAAC,iBAAiB,EAC5B,cAAc,EAAC,YAAY,EAC3B,WAAW,EAAC,YAAY,EACxB,UAAU,EAAE,kBAAkB,EAC9B,sBAAsB,EAAE,WAAW,EACnC,cAAc,EAAE,kBAAkB,EAClC,WAAW,EAAE,oEAAoE,EACjF,aAAa,EAAE,GAAG,EAClB,SAAS,EAAE,IAAI,GAClB;;AACD,AAAA,YAAY,CAAC,GAAG,EAAE,YAAY,CAAC,GAAG,CAAC,EAC/B,aAAa,EAAC,eAAe,EAC7B,UAAU,EAAC,eAAe,EAC1B,cAAc,EAAC,YAAY,EAC3B,WAAW,EAAC,YAAY,GAC3B;;AACD,AAAA,WAAW,CAAC,GAAG,CAAC,EACZ,WAAW,EAAE,oBAAoB,GACpC;;AACD,AAAA,YAAY,CAAC,GAAG,CAAC,EACb,WAAW,EAAE,cAAc,EAC3B,WAAW,EAAE,eAAe,EAC5B,SAAS,EAAE,IAAI,GAClB;;AAED,AAAA,UAAU,CAAC,KAAK,CAAC,EAAE,KAAK,EAAE,IAAI,GAAI;;AAElC,6IAQG;AAEH,AAAA,UAAU,CAAC,EACP,UAAU,EAtEY,OAAO,CAsEQ,UAAU,EAC/C,KAAK,EArEO,OAAO,CAqEG,UAAU,GA8InC;;AAhJD,AAGI,UAHM,CAGN,GAAG,EAHP,UAAU,CAGD,IAAI,CAAC,EACT,UAAU,EAzEW,OAAO,EA0E5B,KAAK,EAxEM,OAAO,EAyElB,SAAS,EAAE,IAAI,GACf;;AAPL,AASI,UATM,CASN,IAAI,EATR,UAAU,CAUN,EAAE,EAVN,UAAU,CAWN,GAAG,EAXP,UAAU,CAYN,GAAG,EAZP,UAAU,CAaN,GAAG,EAbP,UAAU,CAcN,GAAG,EAdP,UAAU,CAeN,GAAG,EAfP,UAAU,CAgBN,GAAG,EAhBP,UAAU,CAiBN,GAAG,EAjBP,UAAU,CAkBN,GAAG,EAlBP,UAAU,CAmBN,GAAG,EAnBP,UAAU,CAoBN,GAAG,EApBP,UAAU,CAqBN,GAAG,EArBP,UAAU,CAsBN,GAAG,EAtBP,UAAU,CAuBN,GAAG,CAAC,EACC,KAAK,EAAC,OAAkB,GAC5B;;AAzBL,AA2BI,UA3BM,CA2BN,GAAG,CAAC,EACF,KAAK,EAhGD,OAAO,GAiGZ;;AA7BL,AA+BI,UA/BM,CA+BN,IAAI,EA/BR,UAAU,CAgCN,EAAE,EAhCN,UAAU,CAiCN,EAAE,EAjCN,UAAU,CAkCN,EAAE,EAlCN,UAAU,CAmCN,EAAE,EAnCN,UAAU,CAoCN,GAAG,EApCP,UAAU,CAqCN,GAAG,EArCP,UAAU,CAsCN,GAAG,EAtCP,UAAU,CAuCN,GAAG,EAvCP,UAAU,CAwCN,GAAG,EAxCP,UAAU,
CAyCN,GAAG,EAzCP,UAAU,CA0CN,GAAG,EA1CP,UAAU,CA2CN,GAAG,EA3CP,UAAU,CA4CN,GAAG,EA5CP,UAAU,CA6CN,GAAG,EA7CP,UAAU,CA8CN,GAAG,EA9CP,UAAU,CA+CN,GAAG,EA/CP,UAAU,CAgDN,GAAG,EAhDP,UAAU,CAiDN,GAAG,EAjDP,UAAU,CAkDN,GAAG,EAlDP,UAAU,CAmDN,GAAG,EAnDP,UAAU,CAoDN,GAAG,EApDP,UAAU,CAqDN,EAAE,EArDN,UAAU,CAsDN,GAAG,CAAC,EACF,KAAK,EA1HK,OAAO,EA2HjB,gBAAgB,EA7HI,OAAO,CA6HgB,UAAU,GACtD;;AAzDL,AA2DI,UA3DM,CA2DN,EAAE,CAAC,EACE,WAAW,EAAE,IAAI,EACjB,KAAK,EAAE,OAAkB,GAChC;;AA9DF,AAgEI,UAhEM,CAgEN,GAAG,CAAC,EACF,eAAe,EAAE,SAAS,GAC3B;;AAlEL,AAoEI,UApEM,CAoEN,GAAG,CAAC,EACF,UAAU,EAAE,MAAM,GACnB;;AAtEL,AAwEI,UAxEM,CAwEN,EAAE,EAxEN,UAAU,CAyEN,GAAG,EAzEP,UAAU,CA0EN,GAAG,EA1EP,UAAU,CA2EN,IAAI,EA3ER,UAAU,CA4EN,GAAG,CAAC,EACF,KAAK,EA/ID,OAAO,GAgJZ;;AA9EL,AAgFI,UAhFM,CAgFN,GAAG,CAAC,EACF,KAAK,EAAE,IAAI,GACZ;;AAlFL,AAoFI,UApFM,CAoFN,GAAG,EApFP,UAAU,CAqFN,GAAG,EArFP,UAAU,CAsFN,GAAG,EAtFP,UAAU,CAuFN,GAAG,EAvFP,UAAU,CAwFN,GAAG,EAxFP,UAAU,CAyFN,GAAG,EAzFP,UAAU,CA0FN,GAAG,EA1FP,UAAU,CA2FN,GAAG,EA3FP,UAAU,CA4FN,GAAG,CAAC,EACF,KAAK,EA9JD,OAAO,GA+JZ;;AA9FL,AAgGI,UAhGM,CAgGN,GAAG,EAhGP,UAAU,CAiGN,GAAG,EAjGP,UAAU,CAkGN,GAAG,EAlGP,UAAU,CAmGN,GAAG,EAnGP,UAAU,CAoGN,GAAG,EApGP,UAAU,CAqGN,GAAG,EArGP,UAAU,CAsGN,GAAG,EAtGP,UAAU,CAuGN,GAAG,CAAC,EACF,UAAU,EAAE,MAAM,GACnB;;AAzGL,AA2GI,UA3GM,CA2GN,GAAG,EA3GP,UAAU,CA4GN,GAAG,EA5GP,UAAU,CA6GN,GAAG,EA7GP,UAAU,CA8GN,GAAG,CACF,EACC,KAAK,EA1KM,OAAkB,GA2K9B;;AAjHL,AAmHI,UAnHM,CAmHN,EAAE,EAnHN,UAAU,CAoHN,EAAE,EApHN,UAAU,CAqHN,GAAG,EArHP,UAAU,CAsHN,GAAG,EAtHP,UAAU,CAuHN,GAAG,EAvHP,UAAU,CAwHN,GAAG,EAxHP,UAAU,CAyHN,GAAG,EAzHP,UAAU,CA0HN,GAAG,EA1HP,UAAU,CA2HN,GAAG,CAAC,EACF,KAAK,EA1LD,OAAO,GA2LZ;;AA7HL,AA+HI,UA/HM,CA+HN,GAAG,CAAC,EACC,KAAK,EA1LG,OAAkB,GA2L9B;;AAjIL,AAmII,UAnIM,CAmIN,EAAE,EAnIN,UAAU,CAoIN,GAAG,EApIP,UAAU,CAqIN,GAAG,EArIP,UAAU,CAsIN,GAAG,EAtIP,UAAU,CAuIN,GAAG,EAvIP,UAAU,CAwIN,GAAG,EAxIP,UAAU,CAyIN,GAAG,CAAC,EACF,KAAK,EAvMC,OAAO,GAwMd;;AA3IL,AA6II,UA7IM,CA6IN,GAAG,CAAC,EACF,KAAK,EA1MF,OAAO,GA2MX;;AAGL,AAAA,CAAC,CAAC,IAAI,CAAA,EACJ,SAAS,EAAE,IAAI,GAChB"
-}
\ No newline at end of file
diff --git a/assets/js/search-data.json b/assets/js/search-data.json
deleted file mode 100755
index d8d20a5..0000000
--- a/assets/js/search-data.json
+++ /dev/null
@@ -1,352 +0,0 @@
-{
-
-
- "post0": {
- "title": "2023: A year in review",
- "content": ". . While there haven’t been any blog posts in 2023 :wink:, it has been a productive year for the Policy Simulation Library (PSL) community and PSL Foundation! . We’ve continued to serve our mission through education and outreach efforts. We hosted 13 Demo Days in 2023, including presentations from individuals at the Congressional Budget Office, Allegheny County, NOAA, Johns Hopkins, QuantEcon, the City of New York, and other institutions. Archived videos of the Demo Days are available on our YouTube Channel. . In addition, we hosted an in person workshop at the National Tax Association’s annual conference in November. This event featured the PolicyEngine-US project and was lead by Max Ghenis and Nikhil Woodruff, co-founders of PolicyEngine. Attendees included individuals from the local area (Denver) and conference attendees, who represented academia, government, and think tanks. Max and Nikhil provided an overview of PolicyEngine and then walked attendees through a hands-on exercise using the PolicyEngine US tool, having them write code to generate custom plots in a Google Colab notebook. It was a lot of fun – and the pizza was decent too! . Speaking of PolicyEngine, this fiscally-sponsored project of PSL Foundation had a banner year in terms of fundraising and development. The group received several grants in 2023 and closed out the year with a large grant from Arnold Ventures. They also wrote an NSF grant proposal which they are waiting to hear back from. The group added an experienced nonprofit executive, Leigh Gibson, to their team. Leigh provides support with fundraising and operations, and she’s been instrumental in these efforts. In terms of software development, the PolicyEngine team has been able to greatly leverage volunteers (more than 60!) with Pavel Makarchuk coming on as Policy Modeling Manager to help coordinate these efforts. 
With their community, PolicyEngine has codified numerous US state tax and benefit policies and has developed a robust method to create synthetic data for use in policy analysis. Be on the lookout for a lot more from them in 2024. . QuantEcon, another fiscally sponsored project, has also made tremendous contributions to open source economics in 2023. Most importantly, they ran a very successful summer school in West Africa. In addition, they have continued to make key contributions to software tools for teaching and training in economics. These include Jupyteach, which Spencer Lyon shared in our Demo Day series. With their online materials, textbooks, and workshops around the world, QuantEcon is shaping how researchers and policy analysts employ economic tools to solve real-world problems. . PSL Foundation added a third fiscally sponsored project, the Policy Change Index (PCI), in 2023. PCI was founded by Weifeng Zhong, a Senior Research Fellow at the Mercatus Center at George Mason University, and uses natural language processing and machine learning to predict changes in policy among autocratic regimes. PCI has had a very successful start with PCI-China, predicting policy changes in China, and PCI-Outbreak, predicting the extent of true COVID-19 case counts in China during the pandemic. Currently, they are extending their work to include predictive indices for Russia, North Korea, and Iran. PSL-F is excited for the opportunity to help support this important work. . Other cataloged projects have continued to be widely used in 2023. To note a few of these use cases, the United Nations has partnered with Richard Evans and Jason DeBacker, maintainers of OG-Core, to help bring the modeling platform to developing countries they are assisting. Tax Foundation’s Capital Cost Recovery model has been updated to 2023 and used in their widely cited 2023 Tax Competitiveness Index. 
And the Tax-Calculator and TaxData projects both continue to be used by think tanks and researchers. . As 2023 comes to a close, we look forward to 2024. We’ll be launching a new PSLmodels.org website soon. And there’ll be many more events – we hope you join in. . From all of us at the PSL, best wishes for a happy and healthy New Year! . Resources: . PSL Models | PSL Foundation | PSL Twitter Feed | PSL YouTube channel | PSL on Open Collective | PSL Shop for PSL branded merchandise | .",
- "url": "https://blog.pslmodels.org/2023-year-in-review",
- "relUrl": "/2023-year-in-review",
- "date": " • Dec 28, 2023"
- }
-
-
-
-
- ,"post1": {
- "title": "2022: A year in review",
- "content": ". . This has been another successful year for the Policy Simulation Library, whose great community of contributors continue to make innovative advances in open source policy analysis, and for the PSL Foundation, which supports the Library and its community. We are so thankful for all those who have made financial or technical contributions to the PSL this year! In this blog post, I want to take this time at the end of the year to reflect on a few of the highlights from 2022. . PolicyEngine, a PSL Foundation fiscally-sponsored project, launched PolicyEngine US in April and has since seen many use cases of the model (check out the PolicyEngine year-in-review here). PolicyEngine had begun by leveraging the OpenFisca platform, but has transitioned to their own-maintained PolicyEngine Core. PolicyEngine Core and their related projects (such as PolicyEngine US and PolicyEngine UK) already meet all the criteria set forth by the Policy Simulation Library. Keep an eye out for lots more excellent tax and benefit policy analysis tools from PolicyEngine in 2023 and beyond! . PSL Foundation has partnered with QuantEcon, acting as a fiscal sponsor for their projects that provide training and training materials for economic modeling and econometrics using open source tools. QuantEcon ran a massive open online class in India that had more than 1000 registrants in summer of 2022. They also ran an online course for over 100 students from universities in Africa in 2022. Further, with the funding received through their partnership with PSL Foundation, QuantEcon will continue these efforts in 2023 with a planned, in-person course in India. . PSL hosted its first in-person workshop in March. The workshop focused on open source tools for tax policy analysis including Tax-Calculator, Cost-of-Capital-Calculator, OG-USA, and PolicyEngine US. The PSL event was, appropriately enough, hosted at the MLK Memorial Library in DC. 
We filled the space with 30 attendees from think tanks, consultancies, and government agencies. The workshop was a great success and we look forward to hosting more in-person workshops in the future. . PSL’s bi-weekly Demo Day series continued throughout 2022, with 13 Demo Days this year. In these, we saw a wide array of presenters from institutions such as the Federal Reserve Bank of Atlanta, PolicyEngine, Tax Foundation, National Center for Children in Poverty, IZA Institute of Labor Economics, Channels, the University of South Carolina, the Center for Growth and Opportunity, and the American Enterprise Institute. You can go back and rewatch any of these presentations on YouTube. . It’s been a fantastic year and we expect even more from the community and PSL Foundation in 2023. PSL community members continue to interact several times each week on our public calls. Check out the events page and join us in the New Year! . From all of us at the PSL, best wishes for a happy and healthy New Year! . Resources: . PSL Foundation | PSL Twitter Feed | PSL YouTube channel | PSL on Open Collective | PSL Shop for PSL branded merchandise | .",
- "url": "https://blog.pslmodels.org/2022-year-in-review",
- "relUrl": "/2022-year-in-review",
- "date": " • Dec 31, 2022"
- }
-
-
-
-
- ,"post2": {
- "title": "Demo Day: How does targeted cash assistance affect incentives to work?",
- "content": ". In this week’s Demo Day, I shared my paper published at the Center for Growth and Opportunity in June. “How does targeted cash assistance affect incentives to work?” analyzed a program Mayor Sumbul Siddiqui proposed in Cambridge, Massachusetts to provide $500 per month for 18 months to all families with dependents and income below 200% of the poverty line. . Targeted programs like these are common in guaranteed income pilots, and in some enacted policies, and I find that it would cost-effectively reduce poverty: if expanded to Massachusetts, it would cost $1.2 billion per year and cut child poverty 42%. . However, that targeting comes at a cost. Using the OpenFisca US microsimulation model (supported by the Center for Growth and Opportunity and cataloged by the Policy Simulation Library), I find that the program would deepen an existing welfare cliff at 200% of the poverty line. For example, a family of four would lose over $19,000 total—$9,000 from the cash assistance and $10,000 from other benefits—once they earn a dollar above 200% of the poverty line (about $55,000). To recover those lost benefits, they would have to earn an additional $26,000, a range I call the “earnings dead zone”. . My presentation reviews these trends in both slides and the PolicyEngine US app for computing the impacts of tax and benefit policy. For example, I show how repealing the SNAP emergency allotment would smooth out welfare cliffs, while reducing resources available to low-income families, and how a universal child allowance avoids work disincentives while less cost-effectively reducing poverty. . Policymakers face trade-offs between equity and efficiency, and typically labor supply responses consider marginal tax rates. With their infinite marginal tax rates, welfare cliffs are a less explored area, even though they surface in several parts of the tax and benefit system. This paper makes a start, but more research is yet to be done. .",
- "url": "https://blog.pslmodels.org/demo-day-cambridge-cash-assistance",
- "relUrl": "/demo-day-cambridge-cash-assistance",
- "date": " • Jul 14, 2022"
- }
-
-
-
-
- ,"post3": {
- "title": "Demo Day: Getting Started with GitHub",
- "content": ". Git and GitHub often present themselves as barriers to entry to would-be contributors to PSL projects, even for those who are otherwise experienced with policy modeling. But these tools are critical to collaboration on open source projects. In the Demo Day video linked above, I cover some of the basics to get set up and begin contributing to an open source project. . There are four steps I outline: . Create a “fork” of the repository you are interested in. A fork is a copy of the source code that resides on GitHub (i.e., in the cloud). A fork gives you control over a copy of the source code. You will be able to merge in changes to the code on this fork, even if you don’t have permissions to do so with the original repository. | “Clone” the fork. Cloning will download a copy of the source code from your fork onto your local machine. But cloning is more than just downloading the source code. It will include the version history of the code and automatically create a link between the local files and the remote files on your fork. | Configure your local files to talk to both your fork (which has a default name of origin) and the original repository you forked from (which typically has the default name of upstream). Do this by using your command prompt or terminal to navigate to the directory you just cloned. From there, run: git remote add upstream URL_to_original_repo.git . And check that this worked by giving the command: . git remote -v . | If things worked, you should see URLs to your fork and the upstream repository with “(fetch)” and “(push)” by them More info on this is in the Git docs. . Now that you have copies of the source code on your fork and on your local machine, you are ready to begin contributing. As you make changes to the source code, you’ll want to work on development branches. Branches are copies of the code. 
Ideally, you keep your “main” (or “master”) branch clean (i.e., your best version of the code) and develop the code on branches. When you’ve completed the development work (e.g., adding a new feature), you will then merge this into the “main” branch. | I hope this helps you get started contributing to open source projects. Git and GitHub are valuable tools and there is lots more to learn, but these basics will get you going. For more information, see the links below. If you want to get started working with a project in the Library, feel free to reach out to me through the relevant repo (@jdebacker on GitHub) or drop into a PSL Community Call (dates on the PSL Calendar). . Resources: . PSL Git Tutorial | Git Basics | .",
- "url": "https://blog.pslmodels.org/demo-day-github",
- "relUrl": "/demo-day-github",
- "date": " • Jun 28, 2022"
- }
-
-
-
-
- ,"post4": {
- "title": "Demo Day: Analyzing tax competitiveness with Cost-of-Capital-Calculator",
- "content": ". In the Demo Day video shared here, I show how to use open source tools to analyze international corporate tax competitiveness. The two main tools illustrated are the Cost-of-Capital-Calculator (CCC), a model to compute measures of the tax burden on new investments, and Tax Foundation’s International Tax Competitiveness Index (ITCI). . Tax Foundation has made many helpful resources available online. Their measures of international business tax policy are a great example of this. The ICTI outputs and inputs are all well documented, with source code to reproduce results available on GitHub. . I plug Tax Foundation’s country-by-country data into CCC functions using it’s Python API. Because CCC is designed to flexibly take array or scalar data, operating on rows of tabular data, such as that in the ITCI, is relatively straight-forward. The Google Colab notebook I walk through in this Demo Day, can be a helpful example to follow if you’d like to do something similar to this with the Tax Foundation data - or your own data source. From the basic building blocks there (reading in data, calling CCC functions), you can extend the analysis in a number of ways. For example adding additional years of data (Tax Foundation posts their data back to 2014), modifying economic assumptions, or creating counter-factual policy experiments across sets of countries. . If you find this example useful, or have questions or suggestions about this type of analysis, please feel free to reach out to me. . Resources: . Colab Notebook | Tax Foundation International Tax Competitiveness Index 2021 | GitHub repo for Tax Foundation ITCI data | Cost-of-Capital-Calculator documentation | .",
- "url": "https://blog.pslmodels.org/demo-day-ccc-international",
- "relUrl": "/demo-day-ccc-international",
- "date": " • Apr 18, 2022"
- }
-
-
-
-
- ,"post5": {
- "title": "Demo Day: Modeling taxes and benefits with the PolicyEngine US web app",
- "content": ". PolicyEngine is a nonprofit that builds free, open-source software to compute the impact of public policy. After launching our UK app in October 2021, we’ve just launched our US app, which calculates households’ federal taxes and several benefit programs, both under current law and under customizable policy reforms. . In this Demo Day, I provide background on PolicyEngine and demonstrate how to use PolicyEngine US (a Policy Simulation Library cataloged model) to answer a novel policy question: . How would doubling both (a) the Child Tax Credit and (b) the Supplemental Nutrition Assistance Program (SNAP) net income limit affect a single parent in California with $1,000 monthly rent and $50 monthly broadband costs? . By bringing together tax and benefit models into a web interface, we can answer this question quickly without programming experience, as well as an unlimited array of questions like it. The result is a table breaking down the household’s net income by program, as well as graphs of net income and marginal tax rates as the household’s earnings vary. . I close with a quick demo of PolicyEngine UK, which adds society-wide results like the impact of reforms on the budget, poverty, and inequality, as well as contributed policy parameters. We’re planning to bring those features to PolicyEngine US, along with state tax and benefit programs in all 50 states, over the next two years (if not sooner). . Feel free to explore the app and reach out with any questions at max@policyengine.org. . Resources: . PolicyEngine US | Presentation slides | PolicyEngine blog post on launching PolicyEngine US | .",
- "url": "https://blog.pslmodels.org/demo-day-policyengine-us",
- "relUrl": "/demo-day-policyengine-us",
- "date": " • Apr 12, 2022"
- }
-
-
-
-
- ,"post6": {
- "title": "Policy Simulation Library DC Workshop: Open source tools for analyzing tax policy",
- "content": ". . The Policy Simulation Library is hosting a workshop in Washington, DC on March 25 on open source tools for the analysis of tax policy. Participants will learn how to use open source models from the Library for revenue estimation, distributional analysis, and to simulate economic impacts of tax policy. The workshop is intended to be a hands-on experience and participants can expect to leave with the required software, documentation, and knowledge to continue using these tools. All models in the workshop are written in the Python programming language–familiarity with the language is helpful, but not required. . Workshop Schedule: . 8:15-8:45a: Breakfast | 8:45-9:00a: Introduction | 9:00-9:50a: Using Tax-Calculator for revenue estimation and distributional analysis (Matt Jensen) | 10:00-10:50a: Estimating effective tax rates on investment with Cost-of-Capital-Calculator (Jason DeBacker) | 11:00-11:50a: Macroeconomic modeling of fiscal policy with OG-Core and OG-USA (Richard W. Evans) | noon-1:00p: Lunch and demonstration of PolicyEngine (Max Ghenis) | . The workshop will be held at the Martin Luther King Jr. Memorial Library in Washington, DC. Participants are expected to arrive by 8:30am and the program will conclude at 1:00pm. Breakfast and lunch will be provided. PSL Foundation is sponsoring the event and there is no cost to attend. Attendance is limited to 30 in order to make this a dynamic and interactive workshop. . To register, please use this Google Form. Registration will close March 11. Participants will be expected to bring a laptop to the workshop where they can interact with the software in real time with the instructors. Registered participants will receive an email before the event with a list of software to install before the workshop. . Please feel free to share this invitation with your colleagues. . Questions about the workshop can be directed to Jason DeBacker at jason.debacker@gmail.com. .",
- "url": "https://blog.pslmodels.org/DC-workshop",
- "relUrl": "/DC-workshop",
- "date": " • Mar 3, 2022"
- }
-
-
-
-
- ,"post7": {
- "title": "2021: A year in review",
- "content": ". . As 2021 winds down, I wanted to take a few minutes to reflect on the Policy Simulation Library’s efforts over the past year. With an amazing community of contributors, supporters, and users, PSL has been able to make a real impact in 2021. . The library saw two new projects achieve “cataloged” status: Tax Foundation’s Capital Cost Recovery model and the Federal Reserve Bank of New York’s DSGE.jl model. Both models satisfy all the the PSL criteria for transparency and reproducibility. Both are also written entirely in open source software: the Capital Cost Recovery model is in R and the DSGE model in Julia. . An exciting new project to join the Library this year is PolicyEngine. PolicyEngine is building open source tax and benefit mircosimulation models and very user-friendly interfaces to these models. The goal of this project is to take policy analysis to the masses through intuitive web and mobile interfaces for policy models. The UK version of the PolicyEngine app has already seen use from politicians interested in reforming the tax and benefit system in the UK. . Another excellent new addition to the library is the Federal-State Tax Project. This project provides data imputation tools to allow for state tax data that are representative of each state as well as federal totals. These datasets can then be used in microsimulation models, such as Tax-Calculator to study the impact of federal tax laws across the states. Matt Jensen and Don Boyd have published several pieces with these tools, including in State Tax Notes . PSL Foundation became an official business entity in 2021. While still awaiting a letter of determination for 501(c)(3) status from the IRS, PSL Foundation was able to raise more than $25,000 in the last few months of 2021 to support open source policy analysis! . PSL community members continued to interact several times each week in our public calls. 
The PSL Shop was launched in 2021 so that anyone can get themselves some PSL swag (with some of each purchase going back to the PSL Foundation to support the Library). In addition, PSL hosted 20 Demo Day presentations from 11 different presenters! These short talks covered everything from new projects to interesting applications of some of the first projects to join the Library, as well as general open source tools. . As in past years, PSL cataloged and incubating models were found to be of great use in current policy debates. Whether it was the ARPA, Biden administration proposals to expand the CTC, or California’s Basic Income Bill, the accessibility and ability to reproduce results from these open source projects have made them a boon to policy analysts. . We are looking forward to a great 2022! We expect the Library to continue to grow, foresee many interesting and helpful Demo Days, and are planning a DC PSL Workshop for March 2022. We hope to see you around these or other events! . Best wishes from PSL for a happy and healthy New Year! . Resources: . PSL Foundation | PSL Twitter Feed | PSL YouTube channel | PSL on Open Collective | .",
- "url": "https://blog.pslmodels.org/2021-year-in-review",
- "relUrl": "/2021-year-in-review",
- "date": " • Dec 28, 2021"
- }
-
-
-
-
- ,"post8": {
- "title": "Demo Day: Using synthimpute for data fusion",
- "content": ". Suppose a policy analyst sought to estimate the impact of a policy that changed income tax rates and benefit rules while also adding a progressive wealth tax. The standard approach is to use a microsimulation model, where the rules are programmed as code, and then to run that program over a representative sample of households. Unfortunately, no single US government survey captures all the households characteristics needed to analyze this policy; in particular, the reliable tax and benefit information lies in surveys like the Current Population Survey (CPS), while wealth lies in the Survey of Consumer Finances (SCF). . Assuming the analyst wanted to start with the CPS, they have several options to estimate wealth for households to levy the progressive wealth tax. Two typical approaches include: . Linear regression, predicting wealth from other household characteristics common to the CPS and SCF. | Matching, in which each CPS household is matched with the most similar household in the SCF. | Neither of these approaches, however, aim to estimate the distribution of wealth conditional on other characteristics. Linear regression explicitly estimates the mean prediction, but that could miss the tails of wealth from whom most of the wealth tax revenue will be collected. . Instead, the analyst could apply quantile regression to estimate the distribution of wealth conditional on other characteristics, and then measure the effectiveness of the estimation using quantile loss. . In this Demo Day, I present the concepts of microsimulation, imputation, and quantile loss to motivate the synthimpute Python package I’ve developed with my PolicyEngine colleague Nikhil Woodruff. In an experiment predicting wealth on a holdout set from the SCF, my former colleague Deepak Singh and I found that random forests significantly outperform OLS and matching for quantile regression, and this is the approach applied in synthimpute for both data fusion and data synthesis. 
The synthimpute API will be familiar to users of scikit-learn and statsmodels , with the difference being that synthimpute's rf_impute function returns a random value from the predicted distribution; it can also skew the predictions to meet a target total. . We’ve used synthimpute to fuse data for research reports at the UBI Center and to enhance the PolicyEngine web app for UK tax and benefit simulation, and we welcome new users and contributors. Feel free to explore the repository or contact me with questions at max@policyengine.org. . Resources: . synthimpute package on GitHub | Presentation slides | UBI Center report on land value taxation in the UK, using synthimpute to impute land value from the UK Wealth and Assets Survey to the Family Resources Survey | PolicyEngine UK carbon tax example, using synthimpute to impute carbon emissions from the Living Costs and Food Survey to the Family Resources Survey | Notebook comparing random forests to matching and other techniques using a holdout set from the US Survey of Consumer Finances | My blog post on quantile regression for Towards Data Science, which laid the groundwork for synthimpute | .",
- "url": "https://blog.pslmodels.org/demo-day-synthimpute",
- "relUrl": "/demo-day-synthimpute",
- "date": " • Dec 8, 2021"
- }
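The synthimpute post above evaluates imputations with quantile loss. As a minimal sketch (plain Python, not the synthimpute API), the quantile — or "pinball" — loss at quantile q penalizes under-predictions by q and over-predictions by 1 - q, so minimizing it recovers the q-th conditional quantile:

```python
def quantile_loss(q, actual, predicted):
    """Pinball (quantile) loss for one prediction at quantile q in (0, 1)."""
    error = actual - predicted
    # Under-prediction (error > 0) costs q per unit; over-prediction costs 1 - q.
    return max(q * error, (q - 1) * error)

def mean_quantile_loss(q, actuals, predictions):
    """Average pinball loss over a set of holdout records."""
    return sum(quantile_loss(q, a, p) for a, p in zip(actuals, predictions)) / len(actuals)
```

At q = 0.9, for example, under-predicting wealth by 10 costs 9 while over-predicting by 10 costs only 1, which is why this loss rewards models that capture the upper tail.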
-
-
-
-
- ,"post9": {
- "title": "Demo Day: The OG-Core platform",
- "content": ". The OG-Core model is a general equilibrium, overlapping generations (OG) model suitable for evaluating fiscal policy. Since the work of Alan Auerbach and Laurence Kotlikoff in the 1980s, this class of model has become a standard in the macroeconomic analysis of tax and spending policy. This is for good reason. OG models are able to capture the impacts of taxes and spending in the short and long run, examine incidence of policy across generations of people (not just short run or steady state analysis of a cross-section of the economy), and capture important economic dynamics (e.g., crowding out effects of deficit-financed policy). . In the PSL Demo Day presentation linked above, I cover the basics of OG-Core: its history, its API, and how country-specific models can use OG-Core as a dependency. In brief, OG-Core provides a general overlapping generations framework, from which parameters can be calibrated to represent particular economies. Think of it this way: an economic model is just a set of parameters plus a system of equations. OG-Core spells out all of the equations to represent an economy with heterogeneous agents, production and government sectors, open economy options, and detailed policy rules. OG-Core also includes default values for all parameters, along with parameter metadata and parameter validation rules. A country specific application is then just a particular parameterization of the general OG-Core model. . As an example of a country-specific application, one can look at the OG-USA model. This model provides a calibration of OG-Core to the United States. The source code in that project allows one to go from raw data sources to the estimation and calibration procedures used to determine parameter values representing the United States, to parameter values in formats suitable for use in OG-Core. 
Country-specific models like OG-USA include (where available) links to microsimulation models of tax and spending programs to allow detailed microdata of actual and counterfactual policies to inform the net tax-transfer functions used in the OG-Core model. For those interested in building their own country-specific model, the OG-USA project provides a good example to work from. . We encourage you to take a look at OG-Core and related projects. New contributions and applications are always welcome. If you have questions or comments, reach out through the relevant repositories on GitHub to me, @jdebacker, or Rick Evans, @rickecon. . Resources: . OG-Core documentation | OG-USA documentation | Tax-Calculator documentation | OG-UK repository | OpenFisca-UK repository | Slides from the Demo Day presentation | .",
- "url": "https://blog.pslmodels.org/demo-day-og-core",
- "relUrl": "/demo-day-og-core",
- "date": " • Nov 1, 2021"
- }
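The post above describes a country-specific model as "defaults plus a particular parameterization, subject to validation rules." The toy sketch below illustrates that idea only; the parameter names, ranges, and `parameterize` function are hypothetical, not the actual OG-Core or ParamTools API:

```python
# Hypothetical defaults with validation ranges — illustrative only, not OG-Core's.
DEFAULTS = {
    "frisch": {"value": 0.4, "range": (0.2, 0.62)},  # labor supply elasticity
    "beta": {"value": 0.96, "range": (0.90, 1.0)},   # discount factor
    "alpha": {"value": 0.35, "range": (0.0, 1.0)},   # capital share of output
}

def parameterize(country_updates):
    """Overlay country-specific values on defaults, enforcing validation ranges."""
    params = {name: spec["value"] for name, spec in DEFAULTS.items()}
    for name, value in country_updates.items():
        lo, hi = DEFAULTS[name]["range"]
        if not lo <= value <= hi:
            raise ValueError(f"{name}={value} outside validated range [{lo}, {hi}]")
        params[name] = value
    return params

# A "country-specific model" is then just a particular parameterization:
usa_params = parameterize({"frisch": 0.41, "alpha": 0.35})
```

In the real framework, OG-USA's calibration code produces the update dictionary from raw data, and OG-Core's parameter classes handle the metadata and validation.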
-
-
-
-
- ,"post10": {
- "title": "Demo Day: Deploying apps on Compute Studio",
- "content": ". . Compute Studio (C/S) is a platform for publishing and sharing computational models and data visualizations. In this demo day, I show how to publish your own project on C/S using the new automated deployments feature. You can find an in depth guide to publishing on C/S in the developer docs. . C/S supports two types of projects: models and data visualizations. Models are fed some inputs and return a result. Data visualizations are web servers backed by popular open-source libraries like Bokeh, Dash, or Streamlit. Models are good for long-running processes and producing archivable results that can be shared and returned to easily. Data visualizations are good for highly interactive and custom user experiences. . Now that you’ve checked out the developer docs and set up your model or data-viz, you can head over to the C/S publishing page https://compute.studio/new/ to publish your project. Note that this page is still very much under construction and may look different in a few weeks. . . Next, you will be sent to the second stage in the publish flow where you will provide more details on how to connect your project on C/S: . . Clicking “Connect App” will take you to the project home page: . . Go to the “Settings” button in the top-right corner and this will take you to the project dashboard where you can modify everything from the social preview of your project to the amount of compute resources it needs: . . The “Builds” link in the sidebar will take you to the builds dashboard where you can create your first build: . . It’s time to create the first build. You can do so by clicking “New Build”. This will take you to the build status page. While the build is being scheduled, the page will look like this: . . You can click the “Build History” link and it will show that the build has been started: . . The build status page should be updated at this point and will look something like this: . . 
C/S automated deployments are built on top of Github Actions. Unfortunately, the logs in Github Actions are not available through the Github API until after the workflow is completely finished. The build status dashboard will update as the build progresses and once it’s done, you will have full access to the logs from the build. These will contain outputs from installing your project and the outputs from your project’s tests. . In this case, the build failed. We can inspect the logs to see that an import error caused the failure: . . . I pushed an update to my fork of Tax-Cruncher on Github and restarted the build by clicking “Failure. Start new Build”. The next build succeeded and we can click “Release” to publish the project: . . The builds dashboard now shows the two builds: . . Finally, let’s go run our new model: . . It may take a few seconds for the page to load. This is because the model code and all of its dependencies are being loaded onto the C/S servers for the first time: . . The steps for publishing a data visualization are very similar. The main idea is that you tell C/S what Python file your app lives in and C/S will know how to run it given your data visualization technology choice. .",
- "url": "https://blog.pslmodels.org/demo-day-cs-auto-deploy",
- "relUrl": "/demo-day-cs-auto-deploy",
- "date": " • Sep 20, 2021"
- }
-
-
-
-
- ,"post11": {
- "title": "Demo Day: Unit testing for open source projects",
- "content": ". Unit testing is the testing of individual units or functions of a software application. This differs from regression testing that focuses on the verification of final outputs. Instead, unit testing tests each smallest testable component of your code. This helps to more easily identify and trace errors in the code. . Writing unit tests is good practice, though not one that’s always followed. The biggest barrier to writing unit tests is that doing so takes time. You might wonder “why am I testing code that runs?” But there are a number benefits to writing unit tests: . It ensures that the code does what you expect it to do | You’ll better understand what your code is doing | You will reduce time tracking down bugs in your code | . Often, writing unit tests will save you time in the longer run because it reduces debugging time and because it forces you to think more about what your code does, which often leads to the development of more efficient code. And for open source projects, or projects with many contributors, writing unit tests is important in reducing the likelihood that errors are introduced into your code. This is why the PSL catalog criteria requires projects to provide at least some level of unit testing. . In the PSL Demo Day video linked above, I illustrate how to implement unit tests in R using the testthat package. There are essentially three steps to this process: . Create a directory to put your testing script in, e.g., a folder called tests | Create one or more scripts that define your tests. Each test is represented as a call of the test_that function and contain an statement that will evaluate as true or false (e.g., you may use the expect_equal function to verify that a function returns expected values given certain inputs). | You will want to use test in the name of these tests scripts as well as something descriptive of what is tested. | . | Create a script that will run your tests. 
Here you’ll need to import the testthat package and you’ll need to call the script(s) you are testing to load their functions. | Then you’ll use the test_dir function to pass the directory in which the script(s) you created in Step 2 reside. | . | Check out the video to see examples of how each of these steps is executed. I’ve also found this blog post on unit tests with testthat to be helpful. . Unit testing in Python seems to be more developed and straightforward with the excellent pytest package. While pytest offers many options for parameterizing tests, running tests in parallel, and more, the basic steps remain the same as those outlined above: . Create a directory for your test modules (call this folder tests as pytest will look for that name). | Create test modules that define each test. Tests are defined much like any other function in Python, but will involve an assertion statement that is triggered upon test failure. | You will want to use test in the name of these test modules as well as something descriptive of what is tested. | . | You won’t need to create a script to run your tests as with testthat, but you may create a pytest.ini file to customize your test options. | That’s about it to get started writing unit tests for your code. PSL cataloged projects provide many excellent examples of a variety of unit tests, so search them for examples to build from. In a future Demo Day and blog post, we’ll talk about continuous integration testing to help get even more out of your unit tests. . Resources: . testthat package for unit testing in R | pytest package for unit testing in Python | PSL catalog criteria | Unit tests for the capital-cost-recovery model | .",
- "url": "https://blog.pslmodels.org/demo-day-unit-testing",
- "relUrl": "/demo-day-unit-testing",
- "date": " • Aug 9, 2021"
- }
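The pytest steps described in the post above can be sketched in a few lines. The function under test and the module names here are hypothetical, chosen only to mirror the tax-model flavor of the post:

```python
# contents of mymodel.py — a hypothetical function under test
def marginal_rate(income, bracket_top=50_000, low_rate=0.10, high_rate=0.25):
    """Marginal tax rate for a stylized two-bracket schedule."""
    return low_rate if income <= bracket_top else high_rate

# contents of tests/test_mymodel.py — pytest collects any test_* function
# (in a real project you would `from mymodel import marginal_rate`)
def test_low_bracket():
    assert marginal_rate(30_000) == 0.10

def test_bracket_boundary():
    # The boundary income is taxed at the low rate.
    assert marginal_rate(50_000) == 0.10

def test_high_bracket():
    assert marginal_rate(80_000) == 0.25
```

Running `pytest` from the project root then discovers and runs every `test_*` function in the `tests` folder, reporting each assertion failure with the offending values.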
-
-
-
-
- ,"post12": {
- "title": "Demo Day: Constructing tax data for the 50 states",
- "content": ". Federal income tax reform impacts can vary dramatically across states. The cap on state and local tax deductions (SALT) is a well-known example, but other policies also have differential effects because important tax-relevant features vary across states such as the income distribution, relative importance of wage, business, and retirement income, and family size and structure. Analyzing how policy impacts vary across states requires data that faithfully represent the characteristics of the 50 states. . This Demo Day described a method and software for constructing state weights for microdata files that (1) come as close as possible to targets for individual states, while (2) ensuring that the state weights for each tax record sum to its national weight. The latter objective ensures that the sum of state impacts for a tax reform equals the national impact. . This project developed state weights for a data file with more than 200,000 microdata records. The weighted data file comes within 0.01% of desired values for more than 95% of approximately 10,000 targets. . The goal of the slides and video was to enable a motivated Python-skilled user of the PSL TaxData and Tax-Calculator projects to reproduce project results: 50-state weights for TaxData’s primary output, the puf.csv microdata file (based primarily on an IRS Public Use File), using early-stage open-source software developed in the project. Thus, the demo is technical and focused on nuts and bolts. . The methods and software can also be used to: . Create geographic-area weights for other microdata files | Apportion state weights to Congressional Districts or counties, if suitable targets can be developed | Create state-specific microdata files suitable for modeling state income taxes | . The main topics covered in the slides and video are: . 
Creating national and state targets from IRS summary data | Preparing a national microdata file for state weighting | Approaches to constructing geographic weights | Running software that implements the Poisson-modeling approach used in the project | Measures of quality of the results | .",
- "url": "https://blog.pslmodels.org/demo-day-constructing-tax-data-for-the-50-states",
- "relUrl": "/demo-day-constructing-tax-data-for-the-50-states",
- "date": " • Jul 16, 2021"
- }
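The adding-up constraint described in the post above — each record's state weights must sum to its national weight — can be shown with a toy sketch. All numbers and the `state_weights` helper here are made up for illustration; the actual project solves for weights that also hit ~10,000 state targets via a Poisson-modeling approach:

```python
# Toy data: two records with national weights and first-stage state shares.
national_weight = {"rec1": 100.0, "rec2": 250.0}
state_share = {
    "rec1": {"CA": 0.5, "NY": 0.3, "TX": 0.2},
    "rec2": {"CA": 0.2, "NY": 0.2, "TX": 0.6},
}

def state_weights(national_weight, state_share):
    """Scale each record's state shares so they sum to its national weight."""
    out = {}
    for rec, shares in state_share.items():
        total = sum(shares.values())
        out[rec] = {st: national_weight[rec] * s / total
                    for st, s in shares.items()}
    return out

weights = state_weights(national_weight, state_share)
# The adding-up constraint holds by construction for every record:
assert abs(sum(weights["rec1"].values()) - 100.0) < 1e-9
```

Enforcing this constraint is what guarantees that summing a reform's estimated impact over the 50 states reproduces the national estimate.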
-
-
-
-
- ,"post13": {
- "title": "Demo Day: Using the TaxBrain Python API",
- "content": ". . The TaxBrain project was primarily created to serve as the backend of the Tax-Brain web-application. But at its core, TaxBrain is a Python package that greatly simplifies tax policy analysis. For this PSL Demo-Day, I demonstrated TaxBrain’s capabilities as a standalone package, and how to use it to produce high-level summaries of the revenue impacts of proposed tax policies. The Jupyter Notebook from the presentation can be found here. . TaxBrain’s Python API allows you to run a full analysis of income tax policies in just three lines of code: . from taxbrain import TaxBrain tb = TaxBrain(START_YEAR, END_YEAR, use_cps=True, reform=REFORM_POLICY) tb.run() . Where START_YEAR and END_YEAR are the first and last years, respectively, of the analysis; use_cps is a boolean indicator that you want to use the CPS-based microdata file prepared for use with Tax-Calculator; and REFORM_POLICY is either a JSON file or Python dictionary that specifies a reform suitable for Tax-Calculator. The forthcoming release of TaxBrain will also include a feature that allows you to perform a stacked revenue analysis as well. The inspiration for this feature was presented by Jason DeBacker in a previous demo-day. . Once TaxBrain has been run, there are a number of methods and functions included in the package to create tables and plots to summarize the results. I used the Biden 2020 campaign proposal in the demo and the resulting figures are below. The first is a “volcano plot” that makes it easy to see the magnitude of the change in tax liability individuals across the income distribution face. Each dot represents a tax unit, and the x and y variables can be customized based on the user’s needs. . . The second gives a higher-level look at how taxes change in each income bin. It breaks down what percentage of each income bin faces a tax increase or decrease, and the size of that change. . . 
The final plot shown in the demo simply shows tax liabilities by year over the budget window. . . The last feature I showed was TaxBrain’s automated reports. TaxBrain uses saved results and an included report template to write a report summarizing the findings of your simulation. The reports include tables and figures similar to what you may find in similar write-ups by the Joint Committee on Taxation or the Tax Policy Center, including a summary of significant changes caused by the reform, and all you need is one line of code: . report(tb, name='Biden Proposal', outdir='biden', author='Anderson Frailey') . The above code will save a PDF copy of the report in a directory called biden along with PNG files for each of the graphs created and the raw Markdown text used for the report, which you can then edit as needed if you would like to add content to the report that is not already included. Screenshots of the default report are included below. . . There are of course downsides to using TaxBrain instead of Tax-Calculator directly. Specifically, it’s more difficult, and sometimes impossible, to perform custom tasks like modeling a feature of the tax code that hasn’t been added to Tax-Calculator yet or advanced work with marginal tax rates. But for day-to-day tax modeling, the TaxBrain Python package can significantly simplify any workflow. . Resources: . Tax-Brain GitHub repo | Tax-Brain Documentation | .",
- "url": "https://blog.pslmodels.org/demo-day-tax-brain-python-api",
- "relUrl": "/demo-day-tax-brain-python-api",
- "date": " • Jun 14, 2021"
- }
-
-
-
-
- ,"post14": {
- "title": "Demo Day: Updating Jupyter Book documentation with GitHub Actions",
- "content": ". Open source projects must maintain clear and up-to-date documentation in order to attract users and contributors. Because of this, PSL sets minimum standards for documentation among cataloged projects in its model criteria. A recent innovation in executable books, Jupyter Book, has provided an excellent format for model documentation and has been widely adopted by PSL projects (see for example OG-USA, Tax-Brain, Tax-Calculator). . Jupyter Book allows one to write documents with executable code and text together, as in Jupyter notebooks. But Jupyter Book pushes this further by allowing documents with multiple sections, better integration of TeX for symbols and equations, BibTex style references and citations, links between sections, and Sphinx integration (for auto-built documentation of model APIs from source code). Importantly for sharing documentation, Jupyter Books can easily be compiled to HTML, PDF, or other formats. Portions of a Jupyter Book that contain executable code can be downloaded as Jupyter Notebooks or opened in Google Colab or binder . The Jupyter Book documentation is excellent and will help you get started creating your “book” (tip: pay close attention to formatting details, including proper whitespace). What I do here is outline how you can easily deploy your documentation to the web and keep it up-to-date with your project. . I start from the assumption that you have the source files to build your Jupyter Book checked into the main branch of your project (these maybe yml , md , rst , ipynb or other files). For version control purposes and to keep your repo trim, you generally don’t want to check the built documentation files to this branch (tip: consider adding the folder these files will go to (e.g., /_build to your .gitignore ). When these files are in place and you can successfully build your book locally, it’s time for the first step. . Step 1: Add two GH Actions to your project’s workflow: . 
An action to check that your documentation files build without an error. I like to run this on each push to a PR. The action won’t hang on warnings, but will fail if your Jupyter Book doesn’t build at all. An example of this action from the OG-USA repo is here: | name: Check that docs build on: [push, pull_request] jobs: build: runs-on: ubuntu-latest steps: - name: Checkout uses: actions/checkout@v2 # If you're using actions/checkout@v2 you must set persist-credentials to false in most cases for the deployment to work correctly. with: persist-credentials: false - name: Setup Miniconda uses: conda-incubator/setup-miniconda@v2 with: activate-environment: ogusa-dev environment-file: environment.yml python-version: 3.7 auto-activate-base: false - name: Build # Build Jupyter Book shell: bash -l {0} run: | pip install jupyter-book pip install sphinxcontrib-bibtex==1.0.0 pip install -e . cd docs jb build ./book . To use this in your repo, you’ll just need to change a few settings such as the name of the environment and perhaps the Python version and path to the book source files. Note that in the above yml file sphinxcontrib-bibtex is pinned. You may be able to unpin this, but OG-USA needed this pin for documentation to compile properly due to changes in the jupyter-book and sphinxcontrib-bibtex packages. . An action that builds and deploys the Jupyter Book to GH Pages. The OG-USA project uses the deploy action from James Ives in this workflow. This is something that you will want to run when PRs are merged into your main branch so that the documentation is kept up-to-date with the project. To modify this action for your repo, you’ll need to change the repo name, the environment name, and potentially the Python version, branch name, and path to the book source files. | Step 2: Once the action in (2) above is run, your compiled Jupyter Book docs will be pushed to a gh-pages branch in your repository (the action will create this branch for you if it doesn’t already exist). 
At this point, you should be able to see your docs at the url https://GH_org_name.github.io/Repo_name . But it probably won’t look very good until you complete this next step. To have your Jupyter Book render on the web as you see it on your machine, you will want to push and merge an empty file with the name .nojekyll into your repo’s gh-pages branch. . That’s it! With these actions, you’ll be sure that your book continues to compile and a new version will be published to the web with each merge to your main branch, ensuring that your documentation stays up-to-date. . Some additional tips: . Use Sphinx to document your project’s API. By doing so you’ll automate an important part of your project’s documentation – as long as the docstrings are updated when the source code is, the Jupyter Book you are publishing to the web will be kept in sync with no additional work needed. | You can have your gh-pages-hosted documentation point to a custom URL. | Project maintainers should ensure that docs are updated with PRs that are relevant (e.g., if the PR changes source code affecting a user interface, then documentation showing example usage should be updated) and help contributors make the necessary changes to the documentation source files. | .",
- "url": "https://blog.pslmodels.org/demo-day-jupyter-book-deploy",
- "relUrl": "/demo-day-jupyter-book-deploy",
- "date": " • May 17, 2021"
- }
-
-
-
-
- ,"post15": {
- "title": "Demo Day: Producing stacked revenue estimates with the Tax-Calculator Python API",
- "content": ". It’s often useful to be able to identify the effects of specific provisions individually and not just the overall impact of a proposal with many provisions. Indeed, when revenue estimates of tax law changes are reported (such as this JCT analysis of the American Rescue Plan Act of 2021), they are typically reported on a provision-by-provision basis. Finding the provision-by-provision revenue estimates is cumbersome with the Tax-Brain web application both because it’s hard to iterate over many provisions and because the order matters when stacking estimates, so that one needs to keep this order in mind as parameter values are updated for each additional provision in a full proposal. . In the PSL Demo Day on April 5, 2021, I show how to use the Python API of Tax-Calculator to efficiently produce stacked revenue estimates. In fact, after some initial setup, this can be done with just 12 lines of code (plus a few more to make the output look nice). The Google Colab notebook that illustrates this approach can be found at this link, but here I’ll walk through the four steps that are involved: . Divide up the full proposal into strings of JSON text that contain each provision you want to analyze. My example breaks up the Biden 2020 campaign proposal into seven provisions, but this is illustrative and you can make more or less provisions depending on the detail you would like to see. | Create a dictionary that contains, as its values, the JSON strings. A couple notes on this. First, the dictionary keys should be descriptive of the provisions as they will become the labels for each provision in the final table of revenue estimates we produce. Second, order matters here. You’ll want to be sure the current law baseline is first (the value for this will be an empty dictionary). Then you specify the provisions. 
The order you specify will likely affect your revenue estimates from a given provision (for instance, expanding/restricting a deduction has a larger revenue effect when rates are higher), but there are not hard and fast rules on the “right” order. Traditionally, rate changes are stacked first and tax expenditures later in the order. | Iterate over this dictionary. With a dictionary of provisions in hand, we can write a “for loop” to iterate over the provision, simulating the Tax-Calculator model at each step. Note that when the Policy class object in Tax-Calculator is modified, it only needs to be told the changes in tax law parameters relative to its current state. In other words, when we are stacking provisions, estimating the incremental effect of each, you can think of the Policy object having a baseline policy that is represented by the current law baseline plus all provisions that have been analyzed before the provision at the current iteration. The Policy class was created in this way so that one can easily represent policy changes, requiring the user to only input the set of parameters that are modified, not every single parameter’s value under the hypothetical policy. But this also makes it parsimonious to stack provisions as we are doing here. Notice that the JSON strings for each provision (created in Step 1) can be specified independent of the stacking order. We only needed to slice the full set of proposals into discrete chunks, we didn’t need to worry about creating specifications of cumulative policy changes. | Format output for presentation. After we’ve run a Tax-Calculator simulation for the current law baseline plus each provision (and each year in the budget window), we’ve got all the output we need. With this output, we can quickly create a table that will nicely present our stacked revenue estimate. 
One good check to do here is to create totals across all provisions and compare this to the simulated revenue effects of running the full set of proposals in one go. This check helps to ensure that you didn’t make an error in specifying your JSON strings. For example, it’s easy to leave out one or more provisions, especially if there are many. | I hope this provides a helpful template for your own analysis. Note that one can modify this code in several useful ways. For example, within the for loop, the Behavioral-Responses package can be called to produce revenue estimates that take into account behavioral feedback. Or one could store the individual income tax and payroll tax revenue impacts separately (rather than return the combined values as in the example notebook). Additional outputs (even the full set of microdata after each provision is applied) can be stored for even more analysis. . In the future, look for Tax-Brain to add stacked revenue estimates to its capabilities. It’ll still be important for users to carve up their full list of policy changes into sets of provisions as we did in Steps 1 and 2 above, but Tax-Brain will then take care of the rest behind the scenes. . Resources: . Colab Notebook with example | Biden campaign reform file in PSL Examples | .",
- "url": "https://blog.pslmodels.org/demo-day-stacked-revenue-estimates",
- "relUrl": "/demo-day-stacked-revenue-estimates",
- "date": " • Apr 5, 2021"
- }
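The stacking logic the post above walks through — an ordered dictionary of provisions, a loop that applies them cumulatively, and incremental revenue deltas — can be sketched without Tax-Calculator itself. The `revenue` function below is a made-up stand-in for a full simulation run, and the provision names and numbers are illustrative only:

```python
def revenue(policy):
    """Toy stand-in for a Tax-Calculator run: revenue = rate * base."""
    rate = policy.get("rate", 0.20)
    base = policy.get("base", 500.0)
    return rate * base

# Order matters: baseline first, rate changes next, base broadeners later.
provisions = {
    "Current law": {},                 # baseline is an empty dictionary
    "Raise rate": {"rate": 0.25},
    "Broaden base": {"base": 550.0},
}

cumulative = {}
last_rev = None
for name, params in provisions.items():
    cumulative.update(params)          # each provision stacks on all prior ones
    rev = revenue(cumulative)
    if last_rev is not None:
        print(f"{name}: {rev - last_rev:+.2f}")  # incremental (stacked) effect
    last_rev = rev
```

Note that, as in the post, each provision's JSON-style specification is written independently of the stacking order; the cumulative `update` call is what turns independent chunks into a stacked estimate, and reordering `provisions` changes each increment but not the total.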
-
-
-
-
- ,"post16": {
- "title": "Demo Day: Stitching together apps on Compute Studio",
- "content": ". In Demo Day 8, I talked about connecting multiple apps on Compute Studio with PSL Stitch. The source code for PSL stitch can be found in this repository. . Stitch is composed of three components: . A python package that can be run like a normal Python package. | A RESTful API built with FastAPI that is called remotely to create simulations on Compute Studio. | A GUI built with ReactJS that makes calls to the REST API to create and monitor simulations. | . One of the cool things about this app is that it uses ParamTools to read the JSON files under the hood. This means that it can read links to data in other Compute Studio runs, files on GitHub, or just plain JSON. Here are some example parameters: . policy parameters: cs://PSLmodels:Tax-Brain@49779/inputs/adjustment/policy | tax-cruncher parameters: {"sage": [{"value": 25}]} | business tax parameters: {"CIT_rate": [{"value": 0.25, "year": 2021}]} | . After clicking run, three simulations will be created on Compute Studio and the app will update as soon as the simulations have finished: . . . Once they are done, the simulations are best viewed and interacted with on Compute Studio, but you can still inspect the JSON response from the Compute Studio API: . . I created this app to show that it’s possible to build apps on top of the Compute Studio API. I think PSL Stitch is a neat example of how to do this, but I am even more excited to see what others build next. . Also, this is an open source project and has lots of room for improvement. If you are interested in learning web technologies related to REST APIs and frontend development with JavaScript, then this project could be a good place to start! . Resources: . PSL Stitch | Source code | Compute Studio API Docs | .",
- "url": "https://blog.pslmodels.org/demo-day-cs-api-stitch",
- "relUrl": "/demo-day-cs-api-stitch",
- "date": " • Mar 8, 2021"
- }
-
-
-
-
- ,"post17": {
- "title": "Demo Day: Moving policy reform files from Tax-Brain to Tax-Cruncher",
- "content": "Check out the video: . . Show notes: . I demonstrate how to move a policy reform file from Tax-Brain to Tax-Cruncher using the Compute.Studio API. See the Demo C/S simulation linked below for text instructions that accompany the video. . Resources: . Demo C/S simulation with instructions | .",
- "url": "https://blog.pslmodels.org/demo-day-taxbrain-to-taxcruncher",
- "relUrl": "/demo-day-taxbrain-to-taxcruncher",
- "date": " • Mar 2, 2021"
- }
-
-
-
-
- ,"post18": {
- "title": "Demo Day: Contributing to PSL projects",
- "content": ". In the most recent PSL Demo Day, I illustrate how to contribute to PSL projects. The open source nature of projects in the PSL catalog allows anyone to contribute. The modularity of the code, coupled with robust testing, means that one can bite off small pieces that help improve the models and remain confident those changes work as expected. . To begin the process of finding where to contribute to PSL projects, I advise looking through the PSL GitHub Organization to see what projects interest you. Once a project of interest is identified, looking over the open “Issues” can provide a sense of where model maintainers and users are looking for help (see especially the “Help Wanted” tags). It is also completely appropriate to create a new Issue to express interest in helping and ask for direction on where that might best be done given your experience and preferences. . When you are ready to begin contributing to a project, you’ll want to fork and clone the GitHub repository to get the files onto your local machine, ready for you to work with. Many PSL projects outline the detailed steps to get you up and running. For example, see the Tax-Calculator Contributor Guide, which outlines the step-by-step process for doing this and confirming that everything works as expected on your computer. . After you are set up and ready to begin modifying source code for the PSL project(s) you’re interested in contributing to, you can reference the PSL-incubating Git-Tutorial project, which provides more details on the Git workflow followed by most PSL projects. . As you contribute, you may want to get more involved in the community. A couple of ways to do this are to join any of the PSL community events, all of which are open to the public, and to post to the PSL Discourse Forums. These are great places to meet community members and ask questions about how and where to best contribute. . 
I hope this helps you get started as a PSL contributor – we look forward to getting you involved in making policy analysis better and more transparent! . Resources: . PSL Git-Tutorial | PSL community events | PSL Discourse Forums | Tax-Calculator Contributor Guide | PSL GitHub Organization | .",
- "url": "https://blog.pslmodels.org/demo-day-contributing-psl",
- "relUrl": "/demo-day-contributing-psl",
- "date": " • Mar 2, 2021"
- }
-
-
-
-
- ,"post19": {
- "title": "Demo Day: Running the scf and microdf Python packages in Google Colab",
- "content": ". For Monday’s PSL Demo Day, I showed how to use the scf and microdf PSL Python packages from the Google Colab web-based Jupyter notebook interface. . The scf package extracts data from the Federal Reserve’s Survey of Consumer Finances, the canonical source of US wealth microdata. scf has a single function: load(years, columns) , which then returns a pandas DataFrame with the specified column(s), each record’s survey weight, and the year (when multiple years are requested). . The microdf package analyzes survey microdata, such as that returned by the scf.load function. It offers a consistent paradigm for calculating statistics like means, medians, sums, and inequality statistics like the Gini index. Most functions are structured as follows: f(df, col, w, groupby) where df is a pandas DataFrame of survey microdata, col is a column(s) name to be summarized, w is the weight column, and groupby is the column(s) to group records in before summarizing. . Using Google Colab, I showed how to use these packages to quickly calculate mean, median, and total wealth from the SCF data, without having to install any software or leave the browser. I also demonstrated how to use the groupby argument of microdf functions to show how different measures of wealth inequality have changed over time. Finally, I previewed some of what’s to come with scf and microdf : imputations, extrapolations, inflation, visualization, and documentation, to name a few priorities. . Resources: . Slides | Demo notebook in Google Colab | Simulation from the demonstration | scf GitHub repo | microdf GitHub repo | microdf documentation | .",
- "url": "https://blog.pslmodels.org/demo-day-scf-microdf",
- "relUrl": "/demo-day-scf-microdf",
- "date": " • Jan 29, 2021"
- }
-
-
-
-
- ,"post20": {
- "title": "Demo Day: The OG-USA macroeconomic model of U.S. fiscal policy",
- "content": ". . . In this PSL Demo Day, I demonstrate how to use the open source OG-USA macroeconomic model of U.S. fiscal policy. Jason DeBacker and I (Richard Evans) have been the core maintainers of this project and repository for the last six years. This demo is organized into the following sections. The YouTube webinar linked above took place on January 11, 2021. . A brief note about the value of the PSL community | Description of the OG-USA model | Using OG-USA on Compute Studio | . Brief note about the value of the PSL community . The Policy Simulation Library is a decentralized organization of open source policy models. The Policy Simulation Library GitHub organization houses many open source repositories, each of which represents a curated policy project by a diverse group of maintainers. The projects that have met the highest standards of best practices and documentation are designated as psl-cataloged , while newer projects that are in earlier stages are designated as psl-incubating . The philosophy and goal of the PSL environment is to make policy modeling open and transparent. It also allows more collaboration and cross-project contributions and interactions. . The Policy Simulation Library group has been holding these PSL Demo Day webinars since the end of 2020. The video of each webinar is available on the Policy Simulation Library YouTube channel. These videos are a great resource for learning the different models available in the PSL community, how the models interact, how to contribute to them, and what is on the horizon in their development. Also excellent in many of the PSL Demo Day webinars is a demonstration of how to use the models on the Compute Studio web application platform. . I have been a participant in and contributor to the PSL community since its inception. I love economic policy modeling. And I have learned how sophisticated and complicated economic policy models can be. 
And any simulation can have hundreds of underlying assumptions, some of which may not be explicitly transparent. I think models that are used for public policy analysis have a philosophical imperative to be open source. This allows others to verify results and test sensitivity to assumptions. . Another strong benefit of open source modeling is that it is fundamentally apolitical. With proprietary closed-source policy models, an outside observer might criticize the results of the model based on the perceived political biases of the modeler or the sponsoring organization. With open-source models, a critic can be redirected to the underlying assumptions, structure, and content of the model. This is constructive criticism and debate that moves the science forward. In the current polarized political environment in the U.S., open-source modeling can provide a constructive route for bipartisan cooperation and the democratization of computational modeling. Furthermore, open-source modeling and workflow encourages the widest forms of collaboration and contributions. . Description of the OG-USA model . OG-USA is an open-source overlapping generations, dynamic general equilibrium, heterogeneous agent, macroeconomic model of U.S. fiscal policy. The GitHub repository for the OG-USA source code is github.com/PSLmodels/OG-USA. This repository contains all the source code and instructions for loading and running OG-USA and all of its dependencies on your local machine. We will probably do another PSL Demo Day on how to run OG-USA locally. This Demo Day webinar is about running OG-USA on the Compute Studio web application. See Section “Using OG-USA on Compute.Studio” below. . As a heterogeneous agent macroeconomic model, OG-USA allows for distributional analyses at the individual and firm level. 
That is, you can simulate the model and answer questions like, “How will an increase in the top three personal income tax rates affect people of different ages and income levels?” Microsimulation models can answer these types of distributional analysis questions as well. However, the difference between a macroeconomic model and a microsimulation model is that the macroeconomic models can simulate how each of those individuals and firms will respond to a policy change (e.g., lower labor supply or increased investment demand) and how those behavioral responses will add up and feed back into the macroeconomy (e.g., the effect on GDP, government revenue, government debt, interest rates, and wages). . OG-USA is a large-scale model and comprises tens of thousands of lines of code. The status of all of this code being publicly available on the internet with all collaboration and updates also public makes this an open source project. However, it is not enough to simply post one’s code. We have gone to great lengths to make in-line comments or “docstrings” in the code to clarify the purpose of each function and line of code. For example, look in the OG-USA/ogusa/household.py module. The first function on line 18 is the marg_ut_cons() function. As is described in its docstring, its purpose is to “Compute the marginal utility of consumption.” . These in-code docstrings are not enough. We have also created textbook-style OG-USA documentation at pslmodels.github.io/OG-USA/ using the Jupyter Book medium. This form of documentation has the advantage of being in book form and available online. It allows us to update the documentation in the open-source repository so changes and versions can be tracked. It describes the OG-USA API, OG-USA theory, and OG-USA calibration. As with the model, this documentation is always a work in progress. But being open-source allows outside contributors to help with its updating and error checking. . 
One particular strength of the OG-USA model I want to highlight is its interaction with microsimulation models to incorporate information about tax incentives faced by the heterogeneous households in the model. We have interfaced OG-USA with microsimulation models in India and in the European Commission. OG-USA’s default for modeling the United States is to use the open-source Tax-Calculator microsimulation model, which was described by Anderson Frailey in the last Demo Day of 2020. However, DeBacker and I currently have a project in which we use OG-USA to simulate policies using the Tax Policy Center’s microsimulation model. The way OG-USA interfaces with microsimulation models to incorporate rich tax data is described in the documentation in the calibration chapter entitled, “Tax Functions”. . Using OG-USA on Compute Studio . In the demonstration, I focus on how to run experiments and simulations with OG-USA using the Compute Studio web application platform rather than installing and running the model on your local machine. To use OG-USA on this web application, you will need a Compute Studio account. Once you have an account, you can start running any model available through the site. For some models, you will have to pay for the compute time, although the cost of running these models is very modest. However, all Compute Studio simulations of the OG-USA model are currently sponsored by the Open Source Economics Laboratory. This subsidy will probably run out in the next year. But we are always looking for funding for these models. . Once you are signed up and logged in to your Compute Studio account, you can go to the OG-USA model on Compute Studio at compute.studio/PSLmodels/OG-USA. The experiment that we simulated in the demonstration is available at compute.studio/PSLmodels/OG-USA/206. The description at the top of the simulation page describes the changes we made. You can look through the input page by clicking on the “Inputs” tab. 
We ran the model by clicking the green “Run” button at the lower left of the page. The model took about 5 hours to run, so I pre-computed the results that we discussed in the demo. The outputs of the experiment are available in the “Outputs” tab on the page. I also demonstrated how one can click the “Download Results” button at the bottom of the “Outputs” tab to download more results from the simulation. However, the full set of results is only available by installing and running the OG-USA model simulation on your local machine. . The benefits of the Compute Studio web application are that running the OG-USA model is much easier for the non-expert, and the multiple-hour computation time can be completed on a remote machine in the cloud. . Resources . PSL Demo Day YouTube webinar: “How to use OG-USA” | OG-USA on Compute Studio | Simulation from the demonstration | OG-USA GitHub repo | OG-USA documentation | Tax-Calculator GitHub repo | .",
- "url": "https://blog.pslmodels.org/demo-day-how-to-use-og-usa",
- "relUrl": "/demo-day-how-to-use-og-usa",
- "date": " • Jan 28, 2021"
- }
-
-
-
-
- ,"post21": {
- "title": "Demo Day: Tax-Brain",
- "content": ". For this PSL demo-day I showed how to use the Tax-Brain web-application, hosted on Compute Studio, to analyze proposed individual income tax policies. Tax-Brain integrates the Tax-Calculator and Behavioral-Responses models to make running both static and dynamic analyses of the US federal income and payroll taxes simple. The web interface for the model makes it possible for anyone to run their own analyses without writing a single line of code. . We started the demo by simply walking through the interface and features of the web-app before creating our own sample reform to model. This reform, which to my knowledge does not reflect any proposals currently up for debate, included changes to the income and payroll tax rates, bringing back personal exemptions, modifying the standard deduction, and implementing a universal basic income. . While the model ran, I explained how Tax-Brain validated all of the user inputs, the data behind the model, and how the final tax liability projections are determined. We concluded by looking through the variety of tables and graphs Tax-Brain produces and how they can easily be shared with others. . Resources: . Simulation from the demonstration | Tax-Brain GitHub repo | Tax-Calculator documentation | Behavioral-Responses documentation | .",
- "url": "https://blog.pslmodels.org/demo-day-tax-brain",
- "relUrl": "/demo-day-tax-brain",
- "date": " • Dec 23, 2020"
- }
-
-
-
-
- ,"post22": {
- "title": "2020: A year in review",
- "content": ". . This year has been one to forget! But 2020 did have its bright spots, especially in the PSL community. This post reviews some of the highlights from the year. . The Library was able to welcome two new models to the catalog in 2020: microdf and OpenFisca-UK. microdf provides a number of useful tools for use with economic survey data. OpenFisca-UK builds off the OpenFisca platform, offering a microsimulation model for tax and benefit programs in the UK. . In addition, four new models were added to the Library as incubating projects. The ui-calculator model has received a lot of attention this year in the U.S., as it provides the capability to calculate unemployment insurance benefits across U.S. states, a major mode of delivering financial relief to individuals during the COVID crisis. PCI-Outbreak directly relates to the COVID crisis, using machine learning and natural language processing to estimate the true extent of the COVID pandemic in China. The model finds that actual COVID cases are significantly higher than what official statistics claim. The COVID-MCS model considers COVID case counts and test positivity rates to measure whether or not U.S. communities are meeting certain benchmarks in controlling the spread of the disease. On a lighter note, the Git-Tutorial project provides instruction and resources for learning to use Git and GitHub, with an emphasis on the workflow used by many projects in the PSL community. . The organization surrounding the Policy Simulation Library has been bolstered in two ways. First, we have formed a relationship with the Open Collective Foundation, who is now our fiscal host. This allows PSL to accept tax deductible contributions that will support the efforts of the community. Second, we’ve formed the PSL Foundation, with an initial board that includes Linda Gibbs, Glenn Hubbard, and Jason DeBacker. . Our outreach efforts have grown in 2020 to include the regular PSL Demo Day series and this PSL Blog. 
Community members have also presented work with PSL models at the PyData Global Conference, the Tax Economists Forum, AEI, the Coiled Podcast, and the Virtual Global Village Podcast. New users will also find a better experience learning how to use and contribute to PSL models as many PSL models have improved their documentation through the use of Jupyter Book (e.g., see the Tax-Calculator documentation). . We love seeing the community around open source policymaking expand and are proud that PSL models have been used for important policy analysis in 2020, including analyzing economic policy responses to the pandemic and the platforms of presidential candidates. We look forward to more progress in 2021 and welcome you to join the effort as a contributor, financially or as an open source developer. . Best wishes from PSL for a happy and healthy new year! . Resources: . PSL Twitter Feed | PSL YouTube | PSL on Open Collective | .",
- "url": "https://blog.pslmodels.org/2020-year-in-review",
- "relUrl": "/2020-year-in-review",
- "date": " • Dec 23, 2020"
- }
-
-
-
-
- ,"post23": {
- "title": "Demo Day: Cost-of-Capital-Calculator Web Application",
- "content": ". In the PSL Demo Day video linked above, I demonstrate how to use the Cost-of-Capital-Calculator (CCC) web application on Compute Studio. CCC computes various measures of the impact of the tax system on business investment. These include the Hall-Jorgenson cost of capital, marginal effective tax rates, and effective average tax rates (following the methodology of Devereux and Griffith (1999)). . I begin by illustrating the various parameters available for the user to manipulate. These include parameters of the business and individual income tax systems, as well as parameters representing economic assumptions (e.g., inflation rates and nominal interest rates) and parameters dictating financial and accounting policy (e.g., the fraction of financing using debt). Note that all default values for tax policy parameters represent the “baseline policy”, which is defined as the current law policy in the year being analyzed (which itself is a parameter the user can change). Other parameters are estimated using historical data following the methodology of CBO (2014). . Next, I change a few parameters and run the model. In this example, I move the corporate income tax rate up to 28% and lower bonus depreciation for assets with depreciable lives of 20 years or less to 50%. . Finally, I discuss how to interpret output. The web app returns a table and three figures summarizing marginal effective total tax rates on new investments. This selection of output helps give one a sense of the overall changes, as well as effects across asset types, industries, and types of financing. For the full model output, one can click on “Download Results”. Doing so will download four CSV files containing several measures of the impact of the tax system on investment for very fine asset and industry categories. Users can take these files and create tables and visualizations relevant to their own use case. . Please take the model for a spin and simulate your own reform. 
If you have questions, comments, or suggestions, please let me know on the PSL Discourse (non-technical questions) or by opening an issue in the CCC GitHub repository (technical questions). . Resources: . Compute Studio simulation used in the demonstration | Cost-of-Capital-Calculator web app | Cost-of-Capital-Calculator documentation | Cost-of-Capital-Calculator GitHub repository | .",
- "url": "https://blog.pslmodels.org/demo-day-cost-of-capital-calculator",
- "relUrl": "/demo-day-cost-of-capital-calculator",
- "date": " • Dec 3, 2020"
- }
-
-
-
-
- ,"post24": {
- "title": "Demo Day: Tax-Cruncher",
- "content": ". For the Demo Day on November 16, I showed how to calculate a taxpayer’s liabilities under current law and under a policy reform with Tax-Cruncher. The Tax-Cruncher web application takes two sets of inputs: a taxpayer’s demographic and financial information and the provisions of a tax reform. . For the first Demo Day example (3:50), we looked at how eliminating the state and local tax (SALT) deduction cap and applying payroll tax to earnings above $400,000 would affect a high earner. In particular, our hypothetical filer had $500,000 in wages, $100,000 in capital gains, and $100,000 in itemizable expenses. You can see the results at Compute Studio simulation #634. . For the second example (17:50), we looked at how expanding the Earned Income Tax Credit (EITC) and Child Tax Credit would impact a family with $45,000 in wages and two young children. You can see the results at Compute Studio simulation #636. . Resources: . Tax-Cruncher | Tax-Cruncher-Biden | .",
- "url": "https://blog.pslmodels.org/demo-day-tax-cruncher",
- "relUrl": "/demo-day-tax-cruncher",
- "date": " • Nov 23, 2020"
- }
-
-
-
-
- ,"post25": {
- "title": "Demo Day: Building policy reform files",
- "content": "Check out the video: . We will host Demo Days every two weeks until the end of the year. You can see our schedule on our events page. . . Show notes: . I demonstrate how to build policy reform files using the Tax-Brain webapp on Compute Studio. (Useful links below.) This is an introductory lesson that ends with a cliffhanger. We don’t run the model. But we do generate an individual income and payroll tax reform file that is compatible with a range of policy simulation models and analytic tools, some designed for policy decision makers, others for taxpayers and benefits recipients interested in assessing their own circumstances. . Beyond individual and payroll tax analysis, the reform file can be used with models that assess pass-through and corporate taxation of businesses, as well as a variety of income benefit programs. A wide range of use cases will occupy future events. . Resources: . Demo C/S simulation | IRS Form 1040 | PSL Catalog | PSL Events | .",
- "url": "https://blog.pslmodels.org/demo-day-creating-reform-files",
- "relUrl": "/demo-day-creating-reform-files",
- "date": " • Nov 18, 2020"
- }
-
-
-
-
- ,"post26": {
- "title": "Introducing the PSL Blog",
- "content": "Our mission at the Policy Simulation Library is to improve public policy by opening up models and data preparation routines for policy analysis. To support and showcase our diverse community of users and developers, we engage across several mediums: a monthly newsletter, a Q&A forum, (now-virtual) meetups, our Twitter feed, our YouTube channel, documentation for models in our catalog, and of course, issues and pull requests on GitHub. . Today, we’re adding a new medium: the PSL Blog. We’ll use this space to share major updates on our catalog, provide tutorials, and summarize events or papers that involve our models. . If you’d like to share your work on our blog, or to suggest content, drop me a line. To follow along, add the PSL blog’s RSS feed or subscribe to our newsletter. . Happy reading, . Max Ghenis . Editor, PSL Blog .",
- "url": "https://blog.pslmodels.org/introducing-psl-blog",
- "relUrl": "/introducing-psl-blog",
- "date": " • Nov 6, 2020"
- }
-
-
-
-
-
-
-
-
- ,"page1": {
- "title": "About",
- "content": "The Policy Simulation Library (PSL) is a collection of models and other software for public-policy decisionmaking. PSL is developed by independent projects that meet standards for transparency and accessibility. The PSL community encourages collaborative contribution and makes the tools it develops accessible to a diverse group of users.1 . This website is powered by fastpages, a blogging platform that natively supports Jupyter notebooks in addition to other formats. ↩ . |",
- "url": "https://blog.pslmodels.org/about/",
- "relUrl": "/about/",
- "date": ""
- }
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- ,"page12": {
- "title": "",
- "content": "Sitemap: {{ “sitemap.xml” | absolute_url }} | .",
- "url": "https://blog.pslmodels.org/robots.txt",
- "relUrl": "/robots.txt",
- "date": ""
- }
-
-
-
-}
\ No newline at end of file
diff --git a/assets/js/search.js b/assets/js/search.js
deleted file mode 100755
index 6dcdcf7..0000000
--- a/assets/js/search.js
+++ /dev/null
@@ -1,296 +0,0 @@
-// from https://github.com/pmarsceill/just-the-docs/blob/master/assets/js/just-the-docs.js#L47
-
-(function (jtd, undefined) {
-
-// Event handling
-
-jtd.addEvent = function(el, type, handler) {
- if (el.attachEvent) el.attachEvent('on'+type, handler); else el.addEventListener(type, handler);
-}
-jtd.removeEvent = function(el, type, handler) {
- if (el.detachEvent) el.detachEvent('on'+type, handler); else el.removeEventListener(type, handler);
-}
-jtd.onReady = function(ready) {
- // in case the document is already rendered
- if (document.readyState!='loading') ready();
- // modern browsers
- else if (document.addEventListener) document.addEventListener('DOMContentLoaded', ready);
- // IE <= 8
- else document.attachEvent('onreadystatechange', function(){
- if (document.readyState=='complete') ready();
- });
-}
-
-// Show/hide mobile menu
-
-// function initNav() {
-// const mainNav = document.querySelector('.js-main-nav');
-// const pageHeader = document.querySelector('.js-page-header');
-// const navTrigger = document.querySelector('.js-main-nav-trigger');
-
-// jtd.addEvent(navTrigger, 'click', function(e){
-// e.preventDefault();
-// var text = navTrigger.innerText;
-// var textToggle = navTrigger.getAttribute('data-text-toggle');
-
-// mainNav.classList.toggle('nav-open');
-// pageHeader.classList.toggle('nav-open');
-// navTrigger.classList.toggle('nav-open');
-// navTrigger.innerText = textToggle;
-// navTrigger.setAttribute('data-text-toggle', text);
-// textToggle = text;
-// })
-// }
-
-
-// Site search
-
-function initSearch() {
- var request = new XMLHttpRequest();
- request.open('GET', '/assets/js/search-data.json', true);
-
- request.onload = function(){
- if (request.status >= 200 && request.status < 400) {
- // Success!
- var data = JSON.parse(request.responseText);
-
-
- lunr.tokenizer.separator = /[\s\-/]+/
-
-
- var index = lunr(function () {
- this.ref('id');
- this.field('title', { boost: 200 });
- this.field('content', { boost: 2 });
- this.field('url');
- this.metadataWhitelist = ['position']
-
- for (var i in data) {
- this.add({
- id: i,
- title: data[i].title,
- content: data[i].content,
- url: data[i].url
- });
- }
- });
-
- searchResults(index, data);
- } else {
- // We reached our target server, but it returned an error
- console.log('Error loading ajax request. Request status:' + request.status);
- }
- };
-
- request.onerror = function(){
- // There was a connection error of some sort
- console.log('There was a connection error');
- };
-
- request.send();
-
- function searchResults(index, data) {
- var index = index;
- var docs = data;
- var searchInput = document.querySelector('.js-search-input');
- var searchResults = document.querySelector('.js-search-results');
-
- function hideResults() {
- searchResults.innerHTML = '';
- searchResults.classList.remove('active');
- }
-
- jtd.addEvent(searchInput, 'keydown', function(e){
- switch (e.keyCode) {
- case 38: // arrow up
- e.preventDefault();
- var active = document.querySelector('.search-result.active');
- if (active) {
- active.classList.remove('active');
- if (active.parentElement.previousSibling) {
- var previous = active.parentElement.previousSibling.querySelector('.search-result');
- previous.classList.add('active');
- }
- }
- return;
- case 40: // arrow down
- e.preventDefault();
- var active = document.querySelector('.search-result.active');
- if (active) {
- if (active.parentElement.nextSibling) {
- var next = active.parentElement.nextSibling.querySelector('.search-result');
- active.classList.remove('active');
- next.classList.add('active');
- }
- } else {
- var next = document.querySelector('.search-result');
- if (next) {
- next.classList.add('active');
- }
- }
- return;
- case 13: // enter
- e.preventDefault();
- var active = document.querySelector('.search-result.active');
- if (active) {
- active.click();
- } else {
- var first = document.querySelector('.search-result');
- if (first) {
- first.click();
- }
- }
- return;
- }
- });
-
- jtd.addEvent(searchInput, 'keyup', function(e){
- switch (e.keyCode) {
- case 27: // When esc key is pressed, hide the results and clear the field
- hideResults();
- searchInput.value = '';
- return;
- case 38: // arrow up
- case 40: // arrow down
- case 13: // enter
- e.preventDefault();
- return;
- }
-
- hideResults();
-
- var input = this.value;
- if (input === '') {
- return;
- }
-
- var results = index.query(function (query) {
- var tokens = lunr.tokenizer(input)
- query.term(tokens, {
- boost: 10
- });
- query.term(tokens, {
- wildcard: lunr.Query.wildcard.TRAILING
- });
- });
-
- if (results.length > 0) {
- searchResults.classList.add('active');
- var resultsList = document.createElement('ul');
- resultsList.classList.add('search-results-list');
- searchResults.appendChild(resultsList);
-
- for (var i in results) {
- var result = results[i];
- var doc = docs[result.ref];
-
- var resultsListItem = document.createElement('li');
- resultsListItem.classList.add('search-results-list-item');
- resultsList.appendChild(resultsListItem);
-
- var resultLink = document.createElement('a');
- resultLink.classList.add('search-result');
- resultLink.setAttribute('href', doc.url);
- resultsListItem.appendChild(resultLink);
-
- var resultTitle = document.createElement('div');
- resultTitle.classList.add('search-result-title');
- resultTitle.innerText = doc.title;
- resultLink.appendChild(resultTitle);
-
- var resultRelUrl = document.createElement('span');
- resultRelUrl.classList.add('search-result-rel-date');
- resultRelUrl.innerText = doc.date;
- resultTitle.appendChild(resultRelUrl);
-
- var metadata = result.matchData.metadata;
- var contentFound = false;
- for (var j in metadata) {
- if (metadata[j].title) {
- var position = metadata[j].title.position[0];
- var start = position[0];
- var end = position[0] + position[1];
- resultTitle.innerHTML = doc.title.substring(0, start) + '<span class="search-result-highlight">' + doc.title.substring(start, end) + '</span>' + doc.title.substring(end, doc.title.length) + '<span class="search-result-rel-date">' + doc.date + '</span>';
-
- } else if (metadata[j].content && !contentFound) {
- contentFound = true;
-
- var position = metadata[j].content.position[0];
- var start = position[0];
- var end = position[0] + position[1];
- var previewStart = start;
- var previewEnd = end;
- var ellipsesBefore = true;
- var ellipsesAfter = true;
- for (var k = 0; k < 3; k++) {
- var nextSpace = doc.content.lastIndexOf(' ', previewStart - 2);
- var nextDot = doc.content.lastIndexOf('.', previewStart - 2);
- if ((nextDot > 0) && (nextDot > nextSpace)) {
- previewStart = nextDot + 1;
- ellipsesBefore = false;
- break;
- }
- if (nextSpace < 0) {
- previewStart = 0;
- ellipsesBefore = false;
- break;
- }
- previewStart = nextSpace + 1;
- }
- for (var k = 0; k < 10; k++) {
- var nextSpace = doc.content.indexOf(' ', previewEnd + 1);
- var nextDot = doc.content.indexOf('.', previewEnd + 1);
- if ((nextDot > 0) && (nextDot < nextSpace)) {
- previewEnd = nextDot;
- ellipsesAfter = false;
- break;
- }
- if (nextSpace < 0) {
- previewEnd = doc.content.length;
- ellipsesAfter = false;
- break;
- }
- previewEnd = nextSpace;
- }
- var preview = doc.content.substring(previewStart, start);
- if (ellipsesBefore) {
- preview = '... ' + preview;
- }
- preview += '<span class="search-result-highlight">' + doc.content.substring(start, end) + '</span>';
- preview += doc.content.substring(end, previewEnd);
- if (ellipsesAfter) {
- preview += ' ...';
- }
-
- var resultPreview = document.createElement('div');
- resultPreview.classList.add('search-result-preview');
- resultPreview.innerHTML = preview;
- resultLink.appendChild(resultPreview);
- }
- }
- }
- }
- });
-
- // jtd.addEvent(searchInput, 'blur', function(){
- // setTimeout(function(){ hideResults() }, 300);
- // });
- }
- }
-
-// function pageFocus() {
-// var mainContent = document.querySelector('.js-main-content');
-// mainContent.focus();
-// }
-
- // Document ready
-
- jtd.onReady(function(){
- // initNav();
- // pageFocus();
- if (typeof lunr !== 'undefined') {
- initSearch();
- }
- });
-
- })(window.jtd = window.jtd || {});
\ No newline at end of file
diff --git a/assets/js/vendor/lunr.min.js b/assets/js/vendor/lunr.min.js
deleted file mode 100755
index 34b279d..0000000
--- a/assets/js/vendor/lunr.min.js
+++ /dev/null
@@ -1,6 +0,0 @@
-/**
- * lunr - http://lunrjs.com - A bit like Solr, but much smaller and not as bright - 2.3.6
- * Copyright (C) 2019 Oliver Nightingale
- * @license MIT
- */
-!function(){var e=function(t){var r=new e.Builder;return r.pipeline.add(e.trimmer,e.stopWordFilter,e.stemmer),r.searchPipeline.add(e.stemmer),t.call(r,r),r.build()};e.version="2.3.6",e.utils={},e.utils.warn=function(e){return function(t){e.console&&console.warn&&console.warn(t)}}(this),e.utils.asString=function(e){return void 0===e||null===e?"":e.toString()},e.utils.clone=function(e){if(null===e||void 0===e)return e;for(var t=Object.create(null),r=Object.keys(e),i=0;i0){var c=e.utils.clone(r)||{};c.position=[a,l],c.index=s.length,s.push(new e.Token(i.slice(a,o),c))}a=o+1}}return s},e.tokenizer.separator=/[\s\-]+/,e.Pipeline=function(){this._stack=[]},e.Pipeline.registeredFunctions=Object.create(null),e.Pipeline.registerFunction=function(t,r){r in this.registeredFunctions&&e.utils.warn("Overwriting existing registered function: "+r),t.label=r,e.Pipeline.registeredFunctions[t.label]=t},e.Pipeline.warnIfFunctionNotRegistered=function(t){var r=t.label&&t.label in this.registeredFunctions;r||e.utils.warn("Function is not registered with pipeline. 
This may cause problems when serialising the index.\n",t)},e.Pipeline.load=function(t){var r=new e.Pipeline;return t.forEach(function(t){var i=e.Pipeline.registeredFunctions[t];if(!i)throw new Error("Cannot load unregistered function: "+t);r.add(i)}),r},e.Pipeline.prototype.add=function(){var t=Array.prototype.slice.call(arguments);t.forEach(function(t){e.Pipeline.warnIfFunctionNotRegistered(t),this._stack.push(t)},this)},e.Pipeline.prototype.after=function(t,r){e.Pipeline.warnIfFunctionNotRegistered(r);var i=this._stack.indexOf(t);if(i==-1)throw new Error("Cannot find existingFn");i+=1,this._stack.splice(i,0,r)},e.Pipeline.prototype.before=function(t,r){e.Pipeline.warnIfFunctionNotRegistered(r);var i=this._stack.indexOf(t);if(i==-1)throw new Error("Cannot find existingFn");this._stack.splice(i,0,r)},e.Pipeline.prototype.remove=function(e){var t=this._stack.indexOf(e);t!=-1&&this._stack.splice(t,1)},e.Pipeline.prototype.run=function(e){for(var t=this._stack.length,r=0;r1&&(se&&(r=n),s!=e);)i=r-t,n=t+Math.floor(i/2),s=this.elements[2*n];return s==e?2*n:s>e?2*n:sa?l+=2:o==a&&(t+=r[u+1]*i[l+1],u+=2,l+=2);return t},e.Vector.prototype.similarity=function(e){return this.dot(e)/this.magnitude()||0},e.Vector.prototype.toArray=function(){for(var e=new Array(this.elements.length/2),t=1,r=0;t0){var o,a=s.str.charAt(0);a in s.node.edges?o=s.node.edges[a]:(o=new e.TokenSet,s.node.edges[a]=o),1==s.str.length&&(o["final"]=!0),n.push({node:o,editsRemaining:s.editsRemaining,str:s.str.slice(1)})}if(0!=s.editsRemaining){if("*"in s.node.edges)var u=s.node.edges["*"];else{var u=new e.TokenSet;s.node.edges["*"]=u}if(0==s.str.length&&(u["final"]=!0),n.push({node:u,editsRemaining:s.editsRemaining-1,str:s.str}),s.str.length>1&&n.push({node:s.node,editsRemaining:s.editsRemaining-1,str:s.str.slice(1)}),1==s.str.length&&(s.node["final"]=!0),s.str.length>=1){if("*"in s.node.edges)var l=s.node.edges["*"];else{var l=new 
e.TokenSet;s.node.edges["*"]=l}1==s.str.length&&(l["final"]=!0),n.push({node:l,editsRemaining:s.editsRemaining-1,str:s.str.slice(1)})}if(s.str.length>1){var c,h=s.str.charAt(0),d=s.str.charAt(1);d in s.node.edges?c=s.node.edges[d]:(c=new e.TokenSet,s.node.edges[d]=c),1==s.str.length&&(c["final"]=!0),n.push({node:c,editsRemaining:s.editsRemaining-1,str:h+s.str.slice(2)})}}}return i},e.TokenSet.fromString=function(t){for(var r=new e.TokenSet,i=r,n=0,s=t.length;n=e;t--){var r=this.uncheckedNodes[t],i=r.child.toString();i in this.minimizedNodes?r.parent.edges[r["char"]]=this.minimizedNodes[i]:(r.child._str=i,this.minimizedNodes[i]=r.child),this.uncheckedNodes.pop()}},e.Index=function(e){this.invertedIndex=e.invertedIndex,this.fieldVectors=e.fieldVectors,this.tokenSet=e.tokenSet,this.fields=e.fields,this.pipeline=e.pipeline},e.Index.prototype.search=function(t){return this.query(function(r){var i=new e.QueryParser(t,r);i.parse()})},e.Index.prototype.query=function(t){for(var r=new e.Query(this.fields),i=Object.create(null),n=Object.create(null),s=Object.create(null),o=Object.create(null),a=Object.create(null),u=0;u1?this._b=1:this._b=e},e.Builder.prototype.k1=function(e){this._k1=e},e.Builder.prototype.add=function(t,r){var i=t[this._ref],n=Object.keys(this._fields);this._documents[i]=r||{},this.documentCount+=1;for(var s=0;s=this.length)return e.QueryLexer.EOS;var t=this.str.charAt(this.pos);return this.pos+=1,t},e.QueryLexer.prototype.width=function(){return this.pos-this.start},e.QueryLexer.prototype.ignore=function(){this.start==this.pos&&(this.pos+=1),this.start=this.pos},e.QueryLexer.prototype.backup=function(){this.pos-=1},e.QueryLexer.prototype.acceptDigitRun=function(){var t,r;do t=this.next(),r=t.charCodeAt(0);while(r>47&&r<58);t!=e.QueryLexer.EOS&&this.backup()},e.QueryLexer.prototype.more=function(){return this.pos1&&(t.backup(),t.emit(e.QueryLexer.TERM)),t.ignore(),t.more())return e.QueryLexer.lexText},e.QueryLexer.lexEditDistance=function(t){return 
t.ignore(),t.acceptDigitRun(),t.emit(e.QueryLexer.EDIT_DISTANCE),e.QueryLexer.lexText},e.QueryLexer.lexBoost=function(t){return t.ignore(),t.acceptDigitRun(),t.emit(e.QueryLexer.BOOST),e.QueryLexer.lexText},e.QueryLexer.lexEOS=function(t){t.width()>0&&t.emit(e.QueryLexer.TERM)},e.QueryLexer.termSeparator=e.tokenizer.separator,e.QueryLexer.lexText=function(t){for(;;){var r=t.next();if(r==e.QueryLexer.EOS)return e.QueryLexer.lexEOS;if(92!=r.charCodeAt(0)){if(":"==r)return e.QueryLexer.lexField;if("~"==r)return t.backup(),t.width()>0&&t.emit(e.QueryLexer.TERM),e.QueryLexer.lexEditDistance;if("^"==r)return t.backup(),t.width()>0&&t.emit(e.QueryLexer.TERM),e.QueryLexer.lexBoost;if("+"==r&&1===t.width())return t.emit(e.QueryLexer.PRESENCE),e.QueryLexer.lexText;if("-"==r&&1===t.width())return t.emit(e.QueryLexer.PRESENCE),e.QueryLexer.lexText;if(r.match(e.QueryLexer.termSeparator))return e.QueryLexer.lexTerm}else t.escapeCharacter()}},e.QueryParser=function(t,r){this.lexer=new e.QueryLexer(t),this.query=r,this.currentClause={},this.lexemeIdx=0},e.QueryParser.prototype.parse=function(){this.lexer.run(),this.lexemes=this.lexer.lexemes;for(var t=e.QueryParser.parseClause;t;)t=t(this);return this.query},e.QueryParser.prototype.peekLexeme=function(){return this.lexemes[this.lexemeIdx]},e.QueryParser.prototype.consumeLexeme=function(){var e=this.peekLexeme();return this.lexemeIdx+=1,e},e.QueryParser.prototype.nextClause=function(){var e=this.currentClause;this.query.clause(e),this.currentClause={}},e.QueryParser.parseClause=function(t){var r=t.peekLexeme();if(void 0!=r)switch(r.type){case e.QueryLexer.PRESENCE:return e.QueryParser.parsePresence;case e.QueryLexer.FIELD:return e.QueryParser.parseField;case e.QueryLexer.TERM:return e.QueryParser.parseTerm;default:var i="expected either a field or a term, found "+r.type;throw r.str.length>=1&&(i+=" with value '"+r.str+"'"),new e.QueryParseError(i,r.start,r.end)}},e.QueryParser.parsePresence=function(t){var 
r=t.consumeLexeme();if(void 0!=r){switch(r.str){case"-":t.currentClause.presence=e.Query.presence.PROHIBITED;break;case"+":t.currentClause.presence=e.Query.presence.REQUIRED;break;default:var i="unrecognised presence operator'"+r.str+"'";throw new e.QueryParseError(i,r.start,r.end)}var n=t.peekLexeme();if(void 0==n){var i="expecting term or field, found nothing";throw new e.QueryParseError(i,r.start,r.end)}switch(n.type){case e.QueryLexer.FIELD:return e.QueryParser.parseField;case e.QueryLexer.TERM:return e.QueryParser.parseTerm;default:var i="expecting term or field, found '"+n.type+"'";throw new e.QueryParseError(i,n.start,n.end)}}},e.QueryParser.parseField=function(t){var r=t.consumeLexeme();if(void 0!=r){if(t.query.allFields.indexOf(r.str)==-1){var i=t.query.allFields.map(function(e){return"'"+e+"'"}).join(", "),n="unrecognised field '"+r.str+"', possible fields: "+i;throw new e.QueryParseError(n,r.start,r.end)}t.currentClause.fields=[r.str];var s=t.peekLexeme();if(void 0==s){var n="expecting term, found nothing";throw new e.QueryParseError(n,r.start,r.end)}switch(s.type){case e.QueryLexer.TERM:return e.QueryParser.parseTerm;default:var n="expecting term, found '"+s.type+"'";throw new e.QueryParseError(n,s.start,s.end)}}},e.QueryParser.parseTerm=function(t){var r=t.consumeLexeme();if(void 0!=r){t.currentClause.term=r.str.toLowerCase(),r.str.indexOf("*")!=-1&&(t.currentClause.usePipeline=!1);var i=t.peekLexeme();if(void 0==i)return void t.nextClause();switch(i.type){case e.QueryLexer.TERM:return t.nextClause(),e.QueryParser.parseTerm;case e.QueryLexer.FIELD:return t.nextClause(),e.QueryParser.parseField;case e.QueryLexer.EDIT_DISTANCE:return e.QueryParser.parseEditDistance;case e.QueryLexer.BOOST:return e.QueryParser.parseBoost;case e.QueryLexer.PRESENCE:return t.nextClause(),e.QueryParser.parsePresence;default:var n="Unexpected lexeme type '"+i.type+"'";throw new e.QueryParseError(n,i.start,i.end)}}},e.QueryParser.parseEditDistance=function(t){var 
r=t.consumeLexeme();if(void 0!=r){var i=parseInt(r.str,10);if(isNaN(i)){var n="edit distance must be numeric";throw new e.QueryParseError(n,r.start,r.end)}t.currentClause.editDistance=i;var s=t.peekLexeme();if(void 0==s)return void t.nextClause();switch(s.type){case e.QueryLexer.TERM:return t.nextClause(),e.QueryParser.parseTerm;case e.QueryLexer.FIELD:return t.nextClause(),e.QueryParser.parseField;case e.QueryLexer.EDIT_DISTANCE:return e.QueryParser.parseEditDistance;case e.QueryLexer.BOOST:return e.QueryParser.parseBoost;case e.QueryLexer.PRESENCE:return t.nextClause(),e.QueryParser.parsePresence;default:var n="Unexpected lexeme type '"+s.type+"'";throw new e.QueryParseError(n,s.start,s.end)}}},e.QueryParser.parseBoost=function(t){var r=t.consumeLexeme();if(void 0!=r){var i=parseInt(r.str,10);if(isNaN(i)){var n="boost must be numeric";throw new e.QueryParseError(n,r.start,r.end)}t.currentClause.boost=i;var s=t.peekLexeme();if(void 0==s)return void t.nextClause();switch(s.type){case e.QueryLexer.TERM:return t.nextClause(),e.QueryParser.parseTerm;case e.QueryLexer.FIELD:return t.nextClause(),e.QueryParser.parseField;case e.QueryLexer.EDIT_DISTANCE:return e.QueryParser.parseEditDistance;case e.QueryLexer.BOOST:return e.QueryParser.parseBoost;case e.QueryLexer.PRESENCE:return t.nextClause(),e.QueryParser.parsePresence;default:var n="Unexpected lexeme type '"+s.type+"'";throw new e.QueryParseError(n,s.start,s.end)}}},function(e,t){"function"==typeof define&&define.amd?define(t):"object"==typeof exports?module.exports=t():e.lunr=t()}(this,function(){return e})}();
diff --git a/assets/minima-social-icons.svg b/assets/minima-social-icons.svg
deleted file mode 100755
index 2f54ade..0000000
--- a/assets/minima-social-icons.svg
+++ /dev/null
@@ -1,9 +0,0 @@
diff --git a/categories/index.html b/categories/index.html
deleted file mode 100755
index 336c7a1..0000000
--- a/categories/index.html
+++ /dev/null
@@ -1,721 +0,0 @@
-
Targeted programs like these are common in guaranteed income pilots and in some enacted policies, and I find that this one would cost-effectively reduce poverty: if expanded to Massachusetts, it would cost $1.2 billion per year and cut child poverty by 42%.
-
-
However, that targeting comes at a cost.
-Using the OpenFisca US microsimulation model (supported by the Center for Growth and Opportunity and cataloged by the Policy Simulation Library), I find that the program would deepen an existing welfare cliff at 200% of the poverty line.
-For example, a family of four would lose over $19,000 total—$9,000 from the cash assistance and $10,000 from other benefits—once they earn a dollar above 200% of the poverty line (about $55,000).
-To recover those lost benefits, they would have to earn an additional $26,000, a range I call the “earnings dead zone”.
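The arithmetic of the cliff can be sketched in a few lines of Python. This is a stylized calculation using the round numbers above (an assumed poverty line of $27,500 for a family of four), not the OpenFisca US implementation:

```python
# Stylized welfare cliff for a family of four: benefits end entirely
# once earnings pass 200% of the poverty line (about $55,000), so one
# extra dollar of earnings costs roughly $19,000 in benefits.
POVERTY_LINE = 27_500          # assumed poverty line for a family of four
CLIFF = 2 * POVERTY_LINE       # 200% of the poverty line
LOST_BENEFITS = 19_000         # $9,000 cash assistance + $10,000 other

def net_income(earnings):
    """Earnings plus benefits, which vanish above the cliff."""
    benefits = LOST_BENEFITS if earnings <= CLIFF else 0
    return earnings + benefits

# Crossing the cliff by one dollar lowers net income by $18,999.
drop = net_income(CLIFF) - net_income(CLIFF + 1)
```

In the full model the lost benefits interact with other program rules as earnings rise, which is why recovering them takes about $26,000 of additional earnings rather than $19,000.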
-
-
My presentation reviews these trends in both slides and the PolicyEngine US app for computing the impacts of tax and benefit policy.
-For example, I show how repealing the SNAP emergency allotment would smooth out welfare cliffs, while reducing resources available to low-income families, and how a universal child allowance avoids work disincentives while less cost-effectively reducing poverty.
-
-
Policymakers face trade-offs between equity and efficiency, and analyses of labor supply responses typically focus on marginal tax rates.
-With their infinite marginal tax rates, welfare cliffs are a less explored area, even though they surface in several parts of the tax and benefit system.
-This paper makes a start, but much research remains to be done.
-
-
-
\ No newline at end of file
diff --git a/demo-day-ccc-international.html b/demo-day-ccc-international.html
deleted file mode 100755
index 693bcf2..0000000
--- a/demo-day-ccc-international.html
+++ /dev/null
@@ -1,72 +0,0 @@
-
-
-
-
Demo Day: Analyzing tax competitiveness with Cost-of-Capital-Calculator
Using Cost-of-Capital-Calculator with data on international business tax policies.
In the Demo Day video shared here, I show how to use open source tools to analyze international corporate tax competitiveness.
-The two main tools illustrated are the Cost-of-Capital-Calculator (CCC), a model to compute measures of the tax burden on new investments, and Tax Foundation’s International Tax Competitiveness Index (ITCI).
-
-
Tax Foundation has made many helpful resources available online.
-Their measures of international business tax policy are a great example of this.
-The ITCI outputs and inputs are all well documented, with source code to reproduce results available on GitHub.
-
-
I plug Tax Foundation’s country-by-country data into CCC functions using its Python API.
-Because CCC is designed to flexibly take array or scalar data, operating on rows of tabular data, such as that in the ITCI, is relatively straightforward.
-The Google Colab notebook I walk through in this Demo Day can be a helpful example to follow if you’d like to do something similar with the Tax Foundation data - or your own data source.
-From the basic building blocks there (reading in data, calling CCC functions), you can extend the analysis in a number of ways.
-For example, adding additional years of data (Tax Foundation posts their data back to 2014), modifying economic assumptions, or creating counterfactual policy experiments across sets of countries.
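To give a flavor of what that looks like, here is a minimal sketch of applying a Hall–Jorgenson-style cost-of-capital formula across the rows of a country table with pandas. The column names and every parameter value below are illustrative; they are not CCC's actual API or the ITCI schema:

```python
import pandas as pd

def cost_of_capital(u, z, r=0.05, pi=0.02, delta=0.10):
    """User cost of capital for statutory rate u and present value of
    depreciation deductions z (r, pi, delta are illustrative rates)."""
    return ((r - pi + delta) * (1 - u * z)) / (1 - u) - delta

countries = pd.DataFrame({
    "country": ["USA", "FRA", "DEU"],
    "cit_rate": [0.21, 0.25, 0.30],          # made-up statutory rates
    "pv_depreciation": [0.85, 0.80, 0.75],   # made-up z values
})

# The formula broadcasts over pandas columns, so one call handles
# every row of the table at once.
countries["rho"] = cost_of_capital(
    countries["cit_rate"], countries["pv_depreciation"]
)
```

This row-wise pattern is what makes it easy to swap in additional years of data or counterfactual policy parameters.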
-
-
If you find this example useful, or have questions or suggestions about this type of analysis, please feel free to reach out to me.
Federal income tax reform impacts can vary dramatically across states.
-The cap on state and local tax deductions (SALT) is a well-known example, but other policies also have differential effects because important tax-relevant features vary across states, including the income distribution; the relative importance of wage, business, and retirement income; and family size and structure.
-Analyzing how policy impacts vary across states requires data that faithfully represent the characteristics of the 50 states.
-
-
This Demo Day described a method and software for constructing state weights for microdata files that (1) come as close as possible to targets for individual states, while (2) ensuring that the state weights for each tax record sum to its national weight.
-The latter objective ensures that the sum of state impacts for a tax reform equals the national impact.
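The second constraint is easy to state in code. Here is a toy NumPy sketch with made-up arrays, using a simple row normalization rather than the project's Poisson-modeling approach, to show state weights that sum, record by record, to the national weight:

```python
import numpy as np

rng = np.random.default_rng(0)
n_records, n_states = 5, 3

national_weight = rng.uniform(50, 150, size=n_records)

# Start from arbitrary positive state shares and normalize each row so
# that every record's state weights add up to its national weight.
shares = rng.uniform(size=(n_records, n_states))
shares /= shares.sum(axis=1, keepdims=True)
state_weight = shares * national_weight[:, None]

# The adding-up constraint holds, so summing a reform's state-level
# impacts recovers the national impact.
assert np.allclose(state_weight.sum(axis=1), national_weight)
```

The hard part, of course, is choosing the shares so the weighted records also hit thousands of state-level targets; that is what the project's estimation method does.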
-
-
This project developed state weights for a data file with more than 200,000 microdata records.
-The weighted data file comes within 0.01% of desired values for more than 95% of approximately 10,000 targets.
-
-
The goal of the slides and video was to enable a motivated Python-skilled user of the PSL TaxData and Tax-Calculator projects to reproduce project results: 50-state weights for TaxData’s primary output, the puf.csv microdata file (based primarily on an IRS Public Use File), using early-stage open-source software developed in the project.
-Thus, the demo is technical and focused on nuts and bolts.
-
-
The methods and software can also be used to:
-
-
Create geographic-area weights for other microdata files
-
Apportion state weights to Congressional Districts or counties, if suitable targets can be developed
-
Create state-specific microdata files suitable for modeling state income taxes
-
-
-
The main topics covered in the slides and video are:
-
-
Creating national and state targets from IRS summary data
-
Preparing a national microdata file for state weighting
-
Approaches to constructing geographic weights
-
Running software that implements the Poisson-modeling approach used in the project
-
Measures of quality of the results
-
-
-
-
\ No newline at end of file
diff --git a/demo-day-contributing-psl.html b/demo-day-contributing-psl.html
deleted file mode 100755
index 4aec1ff..0000000
--- a/demo-day-contributing-psl.html
+++ /dev/null
@@ -1,75 +0,0 @@
-
-
-
-
Demo Day: Contributing to PSL projects
How to help software projects in the Policy Simulation Library.
In the most recent PSL Demo Day, I illustrate how to contribute to PSL projects.
- The open source nature of projects in the PSL catalog allows anyone to contribute.
- The modularity of the code, coupled with robust testing, means that one can bite off small pieces that help improve the models and remain confident those changes work as expected.
-
-
To begin the process of finding where to contribute to PSL projects, I advise looking through the PSL GitHub Organization to see what projects interest you.
-Once a project of interest is identified, looking over the open “Issues” can provide a sense of where model maintainers and users are looking for help (see especially the “Help Wanted” tags).
-It is also completely appropriate to create a new Issue to express interest in helping and ask for direction on where that might best be done given your experience and preferences.
-
-
When you are ready to begin contributing to a project, you’ll want to fork and clone the GitHub repository to get the files onto your local machine, ready for you to work with.
-Many PSL projects outline the detailed steps to get you up and running.
-For example, see the Tax-Calculator Contributor Guide, which outlines the step-by-step process for doing this and confirming that everything works as expected on your computer.
-
-
After you are set up and ready to begin modifying source code for the PSL project(s) you’re interested in contributing to, you can reference the PSL-incubating Git-Tutorial project that provides more details on the Git workflow followed by most PSL projects.
-
-
As you contribute, you may want to get more involved in the community.
-A couple of ways to do this are to join any of the PSL community events, all of which are open to the public, and to post to the PSL Discourse Forums.
-These are great places to meet community members and ask questions about how and where to best contribute.
-
-
I hope this helps you get started as a PSL contributor – we look forward to getting you involved in making policy analysis better and more transparent!
I begin by illustrating the various parameters available for the user to manipulate.
-These include parameters of the business and individual income tax systems, as well as parameters representing economic assumptions (e.g., inflation rates and nominal interest rates) and parameters dictating financial and accounting policy (e.g., the fraction of financing using debt).
-Note that all default values for tax policy parameters represent the “baseline policy”, which is defined as the current law policy in the year being analyzed (which itself is a parameter the user can change).
-Other parameters are estimated using historical data following the methodology of CBO (2014).
-
-
Next, I change a few parameters and run the model.
-In this example, I move the corporate income tax rate up to 28% and lower bonus depreciation for assets with depreciable lives of 20 years or less to 50%.
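For intuition about the direction of such a reform's results, here is a back-of-the-envelope marginal effective tax rate calculation in Python. The formula is the standard cost-of-capital-based METR; every parameter value is illustrative rather than taken from CCC's defaults:

```python
def metr(u, z, r=0.05, pi=0.02, delta=0.10, s=0.03):
    """Marginal effective tax rate on a new investment: u is the
    statutory rate, z the present value of depreciation deductions,
    and s the saver's after-tax return (all values illustrative)."""
    rho = ((r - pi + delta) * (1 - u * z)) / (1 - u) - delta  # cost of capital
    return (rho - s) / rho

baseline = metr(u=0.21, z=0.85)   # lower rate, fuller expensing
reform = metr(u=0.28, z=0.80)     # higher rate, less generous depreciation

# Raising the rate and scaling back depreciation both push the METR up.
assert reform > baseline
```

CCC reports this kind of measure across many asset types, industries, and financing mixes rather than for a single stylized investment.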
-
-
Finally, I discuss how to interpret output.
-The web app returns a table and three figures summarizing marginal effective total tax rates on new investments.
-This selection of output helps give one a sense of the overall changes, as well as effects across asset types, industries, and types of financing.
-For the full model output, one can click on “Download Results”.
-Doing so will download four CSV files containing several measures of the impact of the tax system on investment for very fine asset and industry categories.
-Users can take these files and create tables and visualizations relevant to their own use case.
-
-
Please take the model for a spin and simulate your own reform.
-If you have questions, comments, or suggestions, please let me know on the PSL Discourse (non-technical questions) or by opening an issue in the CCC GitHub repository (technical questions).
We will host Demo Days every two weeks until the end of the year.
-You can see our schedule on our events page.
-
-
-
-
Show notes:
-
-
I demonstrate how to build policy reform files using the Tax-Brain webapp on Compute Studio.
-(Useful links below.)
-This is an introductory lesson that ends with a cliffhanger.
-We don’t run the model.
-But we do generate an individual income and payroll tax reform file that is compatible with a range of policy simulation models and analytic tools, some designed for policy decision makers, others for taxpayers and benefits recipients interested in assessing their own circumstances.
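As a concrete example of what the webapp produces, a reform file is just a small JSON document mapping parameter names to year-to-value maps. The sketch below writes one in a Tax-Calculator-style format; II_rt7 and SS_Earnings_c are real Tax-Calculator parameter names (the top bracket rate and the payroll tax earnings cap), but the values and the exact file schema here are illustrative:

```python
import json

# Hypothetical reform: raise the top individual income tax rate and
# the Social Security payroll tax earnings cap starting in 2022.
reform = {
    "II_rt7": {"2022": 0.396},          # top individual income tax rate
    "SS_Earnings_c": {"2022": 400000},  # payroll tax earnings cap
}

with open("reform.json", "w") as f:
    json.dump(reform, f, indent=4)
```

A file like this is what downstream models and analytic tools consume when they simulate the reform.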
-
-
Beyond individual and payroll tax analysis, the reform file can be used with models that assess pass-through and corporate taxation of businesses, as well as a variety of income benefit programs.
-A wide range of use cases will occupy future events.
In Demo Day 8, I talked about connecting multiple apps on Compute Studio with PSL Stitch. The source code for PSL stitch can be found in this repository.
-
-
Stitch is composed of three components:
-
-
-
A Python package that can be installed and used like any other Python package.
-
A RESTful API built with FastAPI that is called remotely to create simulations on Compute Studio.
-
A GUI built with ReactJS that makes calls to the REST API to create and monitor simulations.
-
-
-
One of the cool things about this app is that it uses ParamTools to read the JSON files under the hood. This means that it can read links to data in other Compute Studio runs, files on GitHub, or just plain JSON. Here are some example parameters:
business tax parameters: {"CIT_rate": [{"value": 0.25, "year": 2021}]}
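To give a feel for that flexibility, here is a tiny sketch of a loader that accepts either a plain JSON string or a URL. The dispatch logic is my own illustration of the idea, not ParamTools internals:

```python
import json
from urllib.parse import urlparse

def load_params(src: str):
    """Accept plain JSON, or a URL pointing at a file on GitHub or a
    Compute Studio run (remote fetching is elided in this sketch)."""
    if urlparse(src).scheme in ("http", "https"):
        raise NotImplementedError("remote sources elided in this sketch")
    return json.loads(src)

params = load_params('{"CIT_rate": [{"value": 0.25, "year": 2021}]}')
```

Each parameter value carries its labels (here, the year), which is what lets the app validate and merge adjustments from different sources.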
-
-
-
-
After clicking run, three simulations will be created on Compute Studio and the app will update as soon as the simulations have finished:
-
-
-
-
-
-
Once they are done, the simulations are best viewed and interacted with on Compute Studio, but you can still inspect the JSON response from the Compute Studio API:
-
-
-
-
I created this app to show that it’s possible to build apps on top of the Compute Studio API. I think PSL Stitch is a neat example of how to do this, but I am even more excited to see what others build next.
-
-
Also, this is an open source project and has lots of room for improvement. If you are interested in learning web technologies related to REST APIs and frontend development with JavaScript, then this project could be a good place to start!
Compute Studio (C/S) is a platform for publishing and sharing computational models and data visualizations. In this Demo Day, I show how to publish your own project on C/S using the new automated deployments feature. You can find an in-depth guide to publishing on C/S
-in the developer docs.
-
-
C/S supports two types of projects: models and data visualizations. Models are fed some inputs and return a result. Data visualizations are web servers backed by popular open-source libraries like Bokeh, Dash, or Streamlit. Models are good for long-running processes and producing archivable results that can be shared and returned to easily. Data visualizations are good for highly interactive and custom user experiences.
-
-
Now that you’ve checked out the developer docs and set up your model or data-viz, you can head over to the C/S publishing page https://compute.studio/new/ to publish your project. Note that this page is still very much under construction and may look different in a few weeks.
-
-
-
-
Next, you will be sent to the second stage in the publish flow where you will provide more details on how to connect your project on C/S:
-
-
-
-
Clicking “Connect App” will take you to the project home page:
-
-
-
-
Go to the “Settings” button in the top-right corner and this will take you to the project dashboard where you can modify everything from the social preview of your project to the amount of compute resources it needs:
-
-
-
-
The “Builds” link in the sidebar will take you to the builds dashboard where you can create your first build:
-
-
-
-
It’s time to create the first build. You can do so by clicking “New Build”. This will take you to the build status page. While the build is being scheduled, the page will look like this:
-
-
-
-
You can click the “Build History” link and it will show that the build has been started:
-
-
-
-
The build status page should be updated at this point and will look something like this:
-
-
-
-
C/S automated deployments are built on top of GitHub Actions. Unfortunately, the logs in GitHub Actions are not available through the GitHub API until after the workflow has completely finished. The build status dashboard will update as the build progresses, and once it’s done, you will have full access to the logs from the build. These will contain outputs from installing your project and the outputs from your project’s tests.
-
-
In this case, the build failed. We can inspect the logs to see that an import error caused the failure:
-
-
-
-
-
-
I pushed an update to my fork of Tax-Cruncher on GitHub and restarted the build by clicking “Failure. Start new Build”. The next build succeeded, and we can click “Release” to publish the project:
-
-
-
-
The builds dashboard now shows the two builds:
-
-
-
-
Finally, let’s go run our new model:
-
-
-
-
It may take a few seconds for the page to load. This is because the model code and all of its dependencies are being loaded onto the C/S servers for the first time:
-
-
-
-
The steps for publishing a data visualization are very similar. The main idea is that you tell C/S what Python file your app lives in and C/S will know how to run it given your data visualization technology choice.
-
-
-
\ No newline at end of file
diff --git a/demo-day-github.html b/demo-day-github.html
deleted file mode 100755
index 13c5fdc..0000000
--- a/demo-day-github.html
+++ /dev/null
@@ -1,96 +0,0 @@
-
-
-
-
Demo Day: Getting Started with GitHub
The basics of forking and cloning repositories and working on branches.
Git and GitHub often present a barrier to entry for would-be contributors to PSL projects, even for those who are otherwise experienced with policy modeling.
-But these tools are critical to collaboration on open source projects.
-In the Demo Day video linked above, I cover some of the basics to get set up and begin contributing to an open source project.
-
-
There are four steps I outline:
-
-
Create a “fork” of the repository you are interested in.
-A fork is a copy of the source code that resides on GitHub (i.e., in the cloud).
-A fork gives you control over a copy of the source code. You will be able to merge in changes to the code on this fork, even if you don’t have permissions to do so with the original repository.
-
“Clone” the fork.
-Cloning will download a copy of the source code from your fork onto your local machine.
-But cloning is more than just downloading the source code.
-It will include the version history of the code and automatically create a link between the local files and the remote files on your fork.
-
Configure your local files to talk to both your fork (which has a default name of origin) and the original repository you forked from (which typically has the default name of upstream).
-Do this by using your command prompt or terminal to navigate to the directory you just cloned.
-From there, run "git remote add upstream <URL-of-the-original-repository>" to register the upstream remote, and then "git remote -v" to check the configuration.
-
-If things worked, you should see URLs for your fork and the upstream repository with “(fetch)” and “(push)” next to them.
-More info on this is in the Git docs.
-
-
-
Now that you have copies of the source code on your fork and on your local machine, you are ready to begin contributing.
-As you make changes to the source code, you’ll want to work on development branches.
-Branches are copies of the code. Ideally, you keep your “main” (or “master”) branch clean (i.e., your best version of the code) and develop the code on branches.
-When you’ve completed the development work (e.g., adding a new feature), you will then merge it into the “main” branch.
-
-
-
I hope this helps you get started contributing to open source projects.
-Git and GitHub are valuable tools and there is lots more to learn, but these basics will get you going.
-For more information, see the links below.
-If you want to get started working with a project in the Library, feel free to reach out to me through the relevant repo (@jdebacker on GitHub) or drop into a PSL Community Call (dates on the PSL Calendar).
In this PSL Demo Day, I demonstrate how to use the open source OG-USA macroeconomic model of U.S. fiscal policy. Jason DeBacker and I (Richard Evans) have been the core maintainers of this project and repository for the last six years. This demo is organized into the following sections. The YouTube webinar linked above took place on January 11, 2021.
The Policy Simulation Library is a decentralized organization of open source policy models. The Policy Simulation Library GitHub organization houses many open source repositories, each of which represents a curated policy project by a diverse group of maintainers. The projects that have met the highest standards of best practices and documentation are designated as psl-cataloged, while newer projects that are in earlier stages are designated as psl-incubating. The philosophy and goal of the PSL environment is to make policy modeling open and transparent. It also allows more collaboration and cross-project contributions and interactions.
-
-
The Policy Simulation Library group has been holding these PSL Demo Day webinars since the end of 2020. The video of each webinar is available on the Policy Simulation Library YouTube channel. These videos are a great resource for learning the different models available in the PSL community, how the models interact, how to contribute to them, and what is on the horizon in their development. Many of the PSL Demo Day webinars also include an excellent demonstration of how to use the models on the Compute Studio web application platform.
-
-
I have been a participant in and contributor to the PSL community since its inception. I love economic policy modeling, and I have learned how sophisticated and complicated economic policy models can be. Any simulation can rest on hundreds of underlying assumptions, some of which may not be explicitly transparent. I think models that are used for public policy analysis have a philosophical imperative to be open source. This allows others to verify results and test sensitivity to assumptions.
-
-
Another strong benefit of open source modeling is that it is fundamentally apolitical. With proprietary closed-source policy models, an outside observer might criticize the results of the model based on the perceived political biases of the modeler or the sponsoring organization. With open-source models, a critic can be redirected to the underlying assumptions, structure, and content of the model. This is constructive criticism and debate that moves the science forward. In the current polarized political environment in the U.S., open-source modeling can provide a constructive route for bipartisan cooperation and the democratization of computational modeling. Furthermore, open-source modeling and workflow encourages the widest forms of collaboration and contributions.
-
-
Description of OG-USA model
-
-
OG-USA is an open-source overlapping generations, dynamic general equilibrium, heterogeneous agent, macroeconomic model of U.S. fiscal policy. The GitHub repository for the OG-USA source code is github.com/PSLmodels/OG-USA. This repository contains all the source code and instructions for loading and running OG-USA and all of its dependencies on your local machine. We will probably do another PSL Demo Day on how to run OG-USA locally. This Demo Day webinar is about running OG-USA on the Compute Studio web application. See Section “Using OG-USA on Compute.Studio” below.
-
-
As a heterogeneous agent macroeconomic model, OG-USA allows for distributional analyses at the individual and firm level. That is, you can simulate the model and answer questions like, “How will an increase in the top three personal income tax rates affect people of different ages and income levels?” Microsimulation models can answer these types of distributional analysis questions as well. However, the difference between a macroeconomic model and a microsimulation model is that the macroeconomic models can simulate how each of those individuals and firms will respond to a policy change (e.g., lower labor supply or increased investment demand) and how those behavioral responses will add up and feed back into the macroeconomy (e.g., the effect on GDP, government revenue, government debt, interest rates, and wages).
-
-
OG-USA is a large-scale model and comprises tens of thousands of lines of code. All of this code is publicly available on the internet, with all collaboration and updates also public, which makes this an open source project. However, it is not enough to simply post one’s code. We have gone to great lengths to write in-line comments and docstrings that clarify the purpose of each function and line of code. For example, look in the OG-USA/ogusa/household.py module. The first function, on line 18, is the marg_ut_cons() function. As is described in its docstring, its purpose is to “Compute the marginal utility of consumption.”
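A stripped-down sketch of that function and its docstring is below; the actual version in OG-USA includes fuller parameter and return descriptions (and additional numerical safeguards), so treat this as an illustration of the docstring style rather than the exact implementation:

```python
import numpy as np


def marg_ut_cons(c, sigma):
    r"""
    Compute the marginal utility of consumption.

    With CRRA utility, marginal utility is :math:`c^{-\sigma}`.

    Args:
        c (array_like): household consumption
        sigma (scalar): coefficient of relative risk aversion

    Returns:
        array_like: marginal utility of consumption
    """
    return np.asarray(c) ** (-sigma)
```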
-
-
These in-code docstrings are not enough. We have also created textbook-style OG-USA documentation at pslmodels.github.io/OG-USA/ using the Jupyter Book medium. This form of documentation has the advantage of being in book form and available online. It allows us to update the documentation in the open-source repository so changes and versions can be tracked. It describes the OG-USA API, OG-USA theory, and OG-USA calibration. As with the model, this documentation is always a work in progress. But being open source allows outside contributors to help with its updates and error checking.
-
-
One particular strength of the OG-USA model I want to highlight is its interaction with microsimulation models to incorporate information about the tax incentives faced by the heterogeneous households in the model. We have interfaced OG-USA with microsimulation models in India and at the European Commission. OG-USA’s default for modeling the United States is to use the open-source Tax-Calculator microsimulation model, which Anderson Frailey described in the last Demo Day of 2020. However, DeBacker and I currently have a project in which we use OG-USA to simulate policies using the Tax Policy Center’s microsimulation model. The way OG-USA interfaces with microsimulation models to incorporate rich tax data is described in the calibration chapter of the documentation entitled “Tax Functions”.
-
-
Using OG-USA on Compute Studio
-
-
In the demonstration, I focus on how to run experiments and simulations with OG-USA using the Compute Studio web application platform rather than installing and running the model on your local machine. To use OG-USA on this web application, you will need a Compute Studio account. Once you have an account, you can start running any model available through the site. For some models, you will have to pay for the compute time, although the cost of running these models is very modest. However, all Compute Studio simulations of the OG-USA model are currently sponsored by the Open Source Economics Laboratory. This subsidy will probably run out in the next year. But we are always looking for funding for these models.
-
-
Once you are signed up and logged in to your Compute Studio account, you can go to the OG-USA model on Compute Studio at compute.studio/PSLmodels/OG-USA. The experiment that we simulated in the demonstration is available at compute.studio/PSLmodels/OG-USA/206. The description at the top of the simulation page describes the changes we made. You can look through the input page by clicking on the “Inputs” tab. We ran the model by clicking the green “Run” button at the lower left of the page. The model took about 5 hours to run, so I pre-computed the results that we discussed in the demo. The outputs of the experiment are available in the “Outputs” tab on the page. I also demonstrated how one can click the “Download Results” button at the bottom of the “Outputs” tab to download more results from the simulation. However, the full set of results is only available by installing and running the OG-USA model simulation on your local machine.
-
-
The benefits of the Compute Studio web application are that running the OG-USA model is much easier for the non-expert, and the multiple-hour computation time can be completed on a remote machine in the cloud.
Open source projects must maintain clear and up-to-date documentation in order to attract users and contributors.
-Because of this, PSL sets minimum standards for documentation among cataloged projects in its model criteria.
-A recent innovation in executable books, Jupyter Book, has provided an excellent format for model documentation and has been widely adopted by PSL projects (see for example OG-USA, Tax-Brain, Tax-Calculator).
-
-
Jupyter Book allows one to write documents with executable code and text together, as in Jupyter notebooks.
-But Jupyter Book pushes this further by allowing documents with multiple sections, better integration of TeX for symbols and equations, BibTex style references and citations, links between sections, and Sphinx integration (for auto-built documentation of model APIs from source code).
-Importantly for sharing documentation, Jupyter Books can easily be compiled to HTML, PDF, or other formats.
-Portions of a Jupyter Book that contain executable code can be downloaded as Jupyter Notebooks or opened in Google Colab or Binder.
-
-
The Jupyter Book documentation is excellent and will help you get started creating your “book” (tip: pay close attention to formatting details, including proper whitespace).
-What I do here is outline how you can easily deploy your documentation to the web and keep it up-to-date with your project.
-
-
I start from the assumption that you have the source files to build your Jupyter Book checked into the main branch of your project (these may be yml, md, rst, ipynb, or other files).
-For version control purposes and to keep your repo trim, you generally don’t want to check the built documentation files into this branch (tip: consider adding the folder these files will go to, e.g., /_build, to your .gitignore).
-When these files are in place and you can successfully build your book locally, it’s time for the first step.
-
-
Step 1: Add two GH Actions to your project’s workflow:
-
-
An action to check that your documentation files build without an error.
-I like to run this on each push to a PR.
-The action won’t hang on warnings, but will fail if your Jupyter Book doesn’t build at all.
-An example of this action from the OG-USA repo is here:
-
-
-
name: Check that docs build
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2  # If you're using actions/checkout@v2 you must set persist-credentials to false in most cases for the deployment to work correctly.
        with:
          persist-credentials: false

      - name: Setup Miniconda
        uses: conda-incubator/setup-miniconda@v2
        with:
          activate-environment: ogusa-dev
          environment-file: environment.yml
          python-version: 3.7
          auto-activate-base: false

      - name: Build  # Build Jupyter Book
        shell: bash -l {0}
        run: |
          pip install jupyter-book
          pip install sphinxcontrib-bibtex==1.0.0
          pip install -e .
          cd docs
          jb build ./book
-
-
-
To use this in your repo, you’ll just need to change a few settings such as the name of the environment and perhaps the Python version and path to the book source files.
-Note that in the above yml file sphinxcontrib-bibtex is pinned.
-You may be able to unpin this, but OG-USA needed this pin for the documentation to compile properly due to changes in the jupyter-book and sphinxcontrib-bibtex packages.
-
-
-
An action that builds and deploys the Jupyter Book to GH Pages.
-The OG-USA project uses the deploy action from James Ives in this workflow.
-This is something that you will want to run when PRs are merged into your main branch so that the documentation is kept up-to-date with the project.
-To modify this action for your repo, you’ll need to change the repo name, the environment name, and potentially the Python version, branch name, and path to the book source files.
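A sketch of such a build-and-deploy workflow, adapted from the build action above, is below; the deploy-action version, environment name, branch, and paths are illustrative and should be adjusted for your repo:

```yaml
name: Build and deploy Jupyter Book docs
on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          persist-credentials: false

      - name: Setup Miniconda
        uses: conda-incubator/setup-miniconda@v2
        with:
          activate-environment: ogusa-dev
          environment-file: environment.yml
          python-version: 3.7
          auto-activate-base: false

      - name: Build  # build the Jupyter Book HTML
        shell: bash -l {0}
        run: |
          pip install jupyter-book
          pip install sphinxcontrib-bibtex==1.0.0
          pip install -e .
          cd docs
          jb build ./book

      - name: Deploy  # push the built HTML to the gh-pages branch
        uses: JamesIves/github-pages-deploy-action@3.7.1
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          BRANCH: gh-pages
          FOLDER: docs/book/_build/html
```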
-
-
-
Step 2: Once the action in (2) above is run, your compiled Jupyter Book docs will be pushed to a gh-pages branch in your repository (the action will create this branch for you if it doesn’t already exist).
-At this point, you should be able to see your docs at the url https://GH_org_name.github.io/Repo_name .
-But it probably won’t look very good until you complete this next step.
-To have your Jupyter Book render on the web as you see it on your machine, you will want to push and merge an empty file with the name .nojekyll into your repo’s gh-pages branch.
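A sketch of that step from the command line, assuming the gh-pages branch already exists on your remote:

```shell
git fetch origin gh-pages          # get the branch the deploy action created
git checkout gh-pages
touch .nojekyll                    # empty marker file: disables Jekyll processing
git add .nojekyll
git commit -m "Add .nojekyll so GitHub Pages serves the built book as-is"
git push origin gh-pages
```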
-
-
That’s it!
-With these actions, you’ll be sure that your book continues to compile and a new version will be published to the web with each merge to your main branch, ensuring that your documentation stays up-to-date.
-
-
Some additional tips:
-
-
Use Sphinx to document your project’s API.
-By doing so you’ll automate an important part of your project’s documentation – as long as the docstrings are updated when the source code is, the Jupyter Book you are publishing to the web will be kept in sync with no additional work needed.
-
You can have your gh-pages-hosted documentation point to a custom URL.
-
Project maintainers should ensure that docs are updated with relevant PRs (e.g., if a PR changes source code affecting a user interface, then documentation showing example usage should be updated) and help contributors make the necessary changes to the documentation source files.
-
-
-
-
-
-
-
-
Demo Day: The OG-Core platform
A Python platform for building country-specific overlapping generations general equilibrium models.
The OG-Core model is a general equilibrium, overlapping generations (OG) model suitable for evaluating fiscal policy.
-Since the work of Alan Auerbach and Laurence Kotlikoff in the 1980s, this class of model has become a standard in the macroeconomic analysis of tax and spending policy.
-This is for good reason.
-OG models are able to capture the impacts of taxes and spending in the short and long run, examine incidence of policy across generations of people (not just short run or steady state analysis of a cross-section of the economy), and capture important economic dynamics (e.g., crowding out effects of deficit-financed policy).
-
-
In the PSL Demo Day presentation linked above, I cover the basics of OG-Core: its history, its API, and how country-specific models can use OG-Core as a dependency.
-In brief, OG-Core provides a general overlapping generations framework, from which parameters can be calibrated to represent particular economies.
-Think of it this way: an economic model is just a set of parameters plus a system of equations.
-OG-Core spells out all of the equations to represent an economy with heterogeneous agents, production and government sectors, open economy options, and detailed policy rules.
-OG-Core also includes default values for all parameters, along with parameter metadata and parameter validation rules.
-A country specific application is then just a particular parameterization of the general OG-Core model.
-
-
As an example of a country-specific application, one can look at the OG-USA model.
-This model provides a calibration of OG-Core to the United States.
-The source code in that project allows one to go from raw data sources to the estimation and calibration procedures used to determine parameter values representing the United States, to parameter values in formats suitable for use in OG-Core.
-Country-specific models like OG-USA include (where available) links to microsimulation models of tax and spending programs to allow detailed microdata of actual and counterfactual policies to inform the net tax-transfer functions used in the OG-Core model.
-For those interested in building their own country-specific model, the OG-USA project provides a good example to work from.
-
-
We encourage you to take a look at OG-Core and related projects.
-New contributions and applications are always welcome.
-If you have questions or comments, reach out through the relevant repositories on Github to me, @jdebacker, or Rick Evans, @rickecon.
PolicyEngine is a nonprofit that builds free, open-source software to compute the impact of public policy.
-After launching our UK app in October 2021, we’ve just launched our US app, which calculates households’ federal taxes and several benefit programs, both under current law and under customizable policy reforms.
-
-
In this Demo Day, I provide background on PolicyEngine and demonstrate how to use PolicyEngine US (a Policy Simulation Library cataloged model) to answer a novel policy question:
-
-
-
How would doubling both (a) the Child Tax Credit and (b) the Supplemental Nutrition Assistance Program (SNAP) net income limit affect a single parent in California with $1,000 monthly rent and $50 monthly broadband costs?
-
-
-
By bringing together tax and benefit models into a web interface, we can answer this question quickly without programming experience, as well as an unlimited array of questions like it.
-The result is a table breaking down the household’s net income by program, as well as graphs of net income and marginal tax rates as the household’s earnings vary.
-
-
I close with a quick demo of PolicyEngine UK, which adds society-wide results like the impact of reforms on the budget, poverty, and inequality, as well as contributed policy parameters.
-We’re planning to bring those features to PolicyEngine US, along with state tax and benefit programs in all 50 states, over the next two years (if not sooner).
-
-
Feel free to explore the app and reach out with any questions at max@policyengine.org.
For Monday’s PSL Demo Day, I showed how to use the scf and microdf PSL Python packages from the Google Colab web-based Jupyter notebook interface.
-
-
The scf package extracts data from the Federal Reserve’s Survey of Consumer Finances, the canonical source of US wealth microdata.
-scf has a single function: load(years, columns), which returns a pandas DataFrame with the specified column(s), each record’s survey weight, and the year (when multiple years are requested).
-
-
The microdf package analyzes survey microdata, such as that returned by the scf.load function.
- It offers a consistent paradigm for calculating statistics like means, medians, sums, and inequality statistics like the Gini index.
- Most functions are structured as follows: f(df, col, w, groupby), where df is a pandas DataFrame of survey microdata, col is the name of the column(s) to be summarized, w is the weight column, and groupby is the column(s) by which to group records before summarizing.
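To make the paradigm concrete, here is a hypothetical weighted-mean function with that signature; this is an illustration of the pattern, not microdf’s actual implementation:

```python
import pandas as pd


def weighted_mean(df, col, w, groupby=None):
    """Weighted mean of df[col] using weight column w, following the
    f(df, col, w, groupby) pattern described above."""
    if groupby is not None:
        # summarize within each group
        return df.groupby(groupby).apply(
            lambda g: (g[col] * g[w]).sum() / g[w].sum()
        )
    return (df[col] * df[w]).sum() / df[w].sum()


# toy survey microdata: three records with survey weights
df = pd.DataFrame(
    {"networth": [10.0, 50.0, 200.0], "wgt": [3, 2, 1], "year": [2016, 2016, 2019]}
)
print(weighted_mean(df, "networth", "wgt"))  # (30 + 100 + 200) / 6 = 55.0
```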
-
-
Using Google Colab, I showed how to use these packages to quickly calculate mean, median, and total wealth from the SCF data, without having to install any software or leave the browser.
- I also demonstrated how to use the groupby argument of microdf functions to show how different measures of wealth inequality have changed over time.
- Finally, I previewed some of what’s to come with scf and microdf : imputations, extrapolations, inflation, visualization, and documentation, to name a few priorities.
It’s often useful to be able to identify the effects of specific provisions individually and not just the overall impact of a proposal with many provisions.
-Indeed, when revenue estimates of tax law changes are reported (such as this JCT analysis of the American Rescue Plan Act of 2021), they are typically reported on a provision-by-provision basis.
-Finding the provision-by-provision revenue estimates is cumbersome with the Tax-Brain web application both because it’s hard to iterate over many provisions and because the order matters when stacking estimates, so that one needs to keep this order in mind as parameter values are updated for each additional provision in a full proposal.
-
-
In the PSL Demo Day on April 5, 2021, I show how to use the Python API of Tax-Calculator to efficiently produce stacked revenue estimates.
-In fact, after some initial setup, this can be done with just 12 lines of code (plus a few more to make the output look nice).
-The Google Colab notebook that illustrates this approach can be found at this link, but here I’ll walk through the four steps that are involved:
-
-
-
-Divide up the full proposal into strings of JSON text that contain each provision you want to analyze.
-My example breaks up the Biden 2020 campaign proposal into seven provisions, but this is illustrative and you can make more or fewer provisions depending on the detail you would like to see.
-
-Create a dictionary that contains, as its values, the JSON strings.
-A couple notes on this.
-First, the dictionary keys should be descriptive of the provisions as they will become the labels for each provision in the final table of revenue estimates we produce.
-Second, order matters here.
-You’ll want to be sure the current law baseline is first (the value for this will be an empty dictionary).
-Then you specify the provisions.
-The order you specify will likely affect your revenue estimates for a given provision (for instance, expanding/restricting a deduction has a larger revenue effect when rates are higher), but there are no hard-and-fast rules on the “right” order.
-Traditionally, rate changes are stacked first and tax expenditures later in the order.
-
-Iterate over this dictionary.
-With a dictionary of provisions in hand, we can write a “for loop” to iterate over the provision, simulating the Tax-Calculator model at each step.
-Note that when the Policy class object in Tax-Calculator is modified, it only needs to be told the changes in tax law parameters relative to its current state.
-In other words, when we are stacking provisions, estimating the incremental effect of each, you can think of the Policy object having a baseline policy that is represented by the current law baseline plus all provisions that have been analyzed before the provision at the current iteration.
-The Policy class was created in this way so that one can easily represent policy changes, requiring the user to only input the set of parameters that are modified, not every single parameter’s value under the hypothetical policy.
-But this also makes it parsimonious to stack provisions as we are doing here.
-Notice that the JSON strings for each provision (created in Step 1) can be specified independent of the stacking order.
-We only needed to slice the full set of proposals into discrete chunks; we didn’t need to worry about creating specifications of cumulative policy changes.
-
-Format output for presentation.
-After we’ve run a Tax-Calculator simulation for the current law baseline plus each provision (and each year in the budget window), we’ve got all the output we need.
-With this output, we can quickly create a table that will nicely present our stacked revenue estimate.
-One good check to do here is to create totals across all provisions and compare this to the simulated revenue effects of running the full set of proposals in one go.
-This check helps to ensure that you didn’t make an error in specifying your JSON strings.
-For example, it’s easy to leave out one or more provisions, especially if there are many.
-
-
-
I hope this provides a helpful template for your own analysis.
-Note that one can modify this code in several useful ways.
-For example, within the for-loops, the Behavioral-Responses can be called to produce revenue estimates that take into account behavioral feedback.
-Or one could store the individual income tax and payroll tax revenue impacts separately (rather than return the combined values as in the example notebook).
-Additional outputs (even the full set of microdata after each provision is applied) can be stored for even more analysis.
-
-
In the future, look for Tax-Brain to add stacked revenue estimates to its capabilities.
-It’ll still be important for users to carve up their full list of policy changes into sets of provisions as we did in Steps 1 and 2 above, but Tax-Brain will then take care of the rest behind the scenes.
Suppose a policy analyst sought to estimate the impact of a policy that changed income tax rates and benefit rules while also adding a progressive wealth tax.
-The standard approach is to use a microsimulation model, where the rules are programmed as code, and then to run that program over a representative sample of households.
-Unfortunately, no single US government survey captures all the household characteristics needed to analyze this policy; in particular, the reliable tax and benefit information lies in surveys like the Current Population Survey (CPS), while wealth lies in the Survey of Consumer Finances (SCF).
-
-
Assuming the analyst wanted to start with the CPS, they have several options to estimate wealth for households to levy the progressive wealth tax.
-Two typical approaches include:
-
-
Linear regression, predicting wealth from other household characteristics common to the CPS and SCF.
-
Matching, in which each CPS household is matched with the most similar household in the SCF.
-
-
-
Neither of these approaches, however, aim to estimate the distribution of wealth conditional on other characteristics.
-Linear regression explicitly estimates the mean prediction, but that could miss the tails of the wealth distribution, from which most of the wealth tax revenue will be collected.
-
-
Instead, the analyst could apply quantile regression to estimate the distribution of wealth conditional on other characteristics, and then measure the effectiveness of the estimation using quantile loss.
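Quantile loss (also called pinball loss) penalizes under- and over-prediction asymmetrically; a minimal sketch of how it scores a single prediction:

```python
def quantile_loss(tau, y, y_hat):
    """Pinball loss for quantile tau: under-predictions are weighted by tau,
    over-predictions by (1 - tau)."""
    diff = y - y_hat
    return max(tau * diff, (tau - 1) * diff)


# at tau = 0.9, under-predicting true wealth by 50 costs nine times as much
# as over-predicting it by 50
under = quantile_loss(0.9, 100, 50)   # true 100, predicted 50
over = quantile_loss(0.9, 50, 100)    # true 50, predicted 100
```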
-
-
In this Demo Day, I present the concepts of microsimulation, imputation, and quantile loss to motivate the synthimpute Python package I’ve developed with my PolicyEngine colleague Nikhil Woodruff.
-In an experiment predicting wealth on a holdout set from the SCF, my former colleague Deepak Singh and I found that random forests significantly outperform OLS and matching for quantile regression, and this is the approach applied in synthimpute for both data fusion and data synthesis.
-The synthimpute API will be familiar to users of scikit-learn and statsmodels, with the difference being that synthimpute’s rf_impute function returns a random value from the predicted distribution; it can also skew the predictions to meet a target total.
-
-
We’ve used synthimpute to fuse data for research reports at the UBI Center and to enhance the PolicyEngine web app for UK tax and benefit simulation, and we welcome new users and contributors.
-Feel free to explore the repository or contact me with questions at max@policyengine.org.
The TaxBrain project was primarily created to serve as the backend of the Tax-Brain web-application.
-But at its core, TaxBrain is a Python package that greatly simplifies tax policy analysis.
-For this PSL Demo-Day, I demonstrated TaxBrain’s capabilities as a standalone package, and how to use it to produce high-level summaries of the revenue impacts of proposed tax policies.
-The Jupyter Notebook from the presentation can be found here.
-
-
TaxBrain’s Python API allows you to run a full analysis of income tax policies in just three lines of code:
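The code snippet itself did not survive in this post; based on the description that follows, the three lines look roughly like this (a hedged sketch, with the capitalized names as placeholders for your own values):

```python
from taxbrain import TaxBrain

tb = TaxBrain(START_YEAR, END_YEAR, use_cps=True, reform=REFORM_POLICY)
tb.run()
```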
Where START_YEAR and END_YEAR are the first and last years, respectively, of the analysis; use_cps is a boolean indicator that you want to use the CPS-based microdata file prepared for use with Tax-Calculator; and REFORM_POLICY is either a JSON file or Python dictionary that specifies a reform suitable for Tax-Calculator.
-The forthcoming release of TaxBrain will also include a feature that allows you to perform a stacked revenue analysis.
-The inspiration for this feature was presented by Jason DeBacker in a previous demo-day.
-
-
Once TaxBrain has been run, there are a number of methods and functions included in the package to create tables and plots to summarize the results.
-I used the Biden 2020 campaign proposal in the demo and the resulting figures are below.
-The first is a “volcano plot” that makes it easy to see the magnitude of the change in tax liability faced by individuals across the income distribution.
-Each dot represents a tax unit, and the x and y variables can be customized based on the user’s needs.
-
-
-
-
The second gives a higher-level look at how taxes change in each income bin.
-It breaks down what percentage of each income bin faces a tax increase or decrease, and the size of that change.
-
-
-
-
The final plot shown in the demo simply shows tax liabilities by year over the budget window.
-
-
-
-
The last feature I showed was TaxBrain’s automated reports.
-TaxBrain uses saved results and an included report template to write a report summarizing the findings of your simulation.
-The reports include tables and figures similar to those you may find in write-ups by the Joint Committee on Taxation or the Tax Policy Center, including a summary of significant changes caused by the reform, and all you need is one line of code:
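That one-liner is not reproduced in this post; using TaxBrain’s report function on a TaxBrain object that has been run, it looks roughly like this (the name, output directory, and author arguments are illustrative):

```python
from taxbrain import report

report(tb, name="Biden 2020 Proposal", outdir="biden", author="Your Name")
```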
The above code will save a PDF copy of the report in a directory called biden along with PNG files for each of the graphs created and the raw Markdown text used for the report, which you can then edit as needed if you would like to add content to the report that is not already included.
-Screenshots of the default report are included below.
-
-
-
-
-
-
-
-
There are of course downsides to using TaxBrain instead of Tax-Calculator directly.
-Specifically, it’s more difficult, and sometimes impossible, to perform custom tasks like modeling a feature of the tax code that hasn’t been added to Tax-Calculator yet or advanced work with marginal tax rates.
-But for day-to-day tax modeling, the TaxBrain Python package can significantly simplify any workflow.
For this PSL demo-day I showed how to use the Tax-Brain web-application,
- hosted on Compute Studio, to analyze proposed individual income tax policies.
-Tax-Brain integrates the Tax-Calculator
-and Behavioral-Responses
-models to make running both static and dynamic analyses of the US federal income
-and payroll taxes simple. The web interface for the model makes it possible for
-anyone to run their own analyses without writing a single line of code.
-
-
We started the demo by simply walking through the interface and features of the
-web-app before creating our own sample reform to model. This reform, which to
-my knowledge does not reflect any proposals currently up for debate, included
-changes to the income and payroll tax rates, bringing back personal exemptions,
-modifying the standard deduction, and implementing a universal basic income.
-
-
While the model ran, I explained how Tax-Brain validated all of the user inputs,
-the data behind the model, and how the final tax liability projections are
-determined. We concluded by looking through the variety of tables and graphs
-Tax-Brain produces and how they can easily be shared with others.
For the Demo Day on November 16, I showed how to calculate a taxpayer’s liabilities under current law and under a policy reform with Tax-Cruncher.
-The Tax-Cruncher web application takes two sets of inputs: a taxpayer’s demographic and financial information and the provisions of a tax reform.
-
-
For the first Demo Day example (3:50), we looked at how eliminating the state and local tax (SALT) deduction cap and applying payroll tax to earnings above $400,000 would affect a high earner.
-In particular, our hypothetical filer had $500,000 in wages, $100,000 in capital gains, and $100,000 in itemizable expenses.
-You can see the results at Compute Studio simulation #634.
-
-
For the second example (17:50), we looked at how expanding the Earned Income Tax Credit (EITC) and Child Tax Credit would impact a family with $45,000 in wages and two young children.
-You can see the results at Compute Studio simulation #636.
I demonstrate how to move a policy reform file from Tax-Brain to Tax-Cruncher using the Compute.Studio API.
-See the Demo C/S simulation linked below for text instructions that accompany the video.
Unit testing is the testing of individual units or functions of a software application.
-This differs from regression testing that focuses on the verification of final outputs.
-Instead, unit testing exercises each smallest testable component of your code.
-This helps to more easily identify and trace errors in the code.
-
-
Writing unit tests is good practice, though not one that’s always followed.
-The biggest barrier to writing unit tests is that doing so takes time.
-You might wonder “why am I testing code that runs?”
-But there are a number of benefits to writing unit tests:
-
-
It ensures that the code does what you expect it to do
-
You’ll better understand what your code is doing
-
You will reduce time tracking down bugs in your code
-
-
-
Often, writing unit tests will save you time in the long run: it reduces debugging time and forces you to think more carefully about what your code does, which often leads to more efficient code.
-And for open source projects, or projects with many contributors, writing unit tests is important in reducing the likelihood that errors are introduced into your code.
-This is why the PSL catalog criteria require projects to provide at least some level of unit testing.
-
-
In the PSL Demo Day video linked above, I illustrate how to implement unit tests in R using the testthat package. There are essentially three steps to this process:
-
-
Create a directory to put your testing script in, e.g., a folder called tests
-
-
Create one or more scripts that define your tests.
-
-
Each test is represented as a call to the test_that function and contains a statement that will evaluate as true or false (e.g., you may use the expect_equal function to verify that a function returns expected values given certain inputs).
-
-You will want to use test in the name of these test scripts, as well as something descriptive of what is tested.
-
-
-
Create a script that will run your tests.
-
-
-Here you’ll need to import the testthat package, and you’ll need to source the script(s) you are testing so that their functions are loaded.
-
-Then you’ll use the test_dir function, passing it the directory in which the script(s) you created in Step 2 reside.
-
-
-
-
-
Check out the video to see examples of how each of these steps is executed.
-I’ve also found this blog post on unit tests with testthat to be helpful.
-
-
Unit testing in Python seems to be more developed and straightforward with the excellent pytest package.
-While pytest offers many options for parameterizing tests, running tests in parallel, and more, the basic steps remain the same as those outlined above:
-
-
Create a directory for your test modules (a folder called tests is the convention, and pytest’s automatic test discovery will find it).
-
Create test modules that define each test
-
-
-Tests are defined much like any other function in Python, but they will include an assertion statement that is triggered upon test failure.
-
-You will want to use test in the name of these test modules, as well as something descriptive of what is tested.
-
-
-
You won’t need to create a script to run your tests as with testthat, but you may create a pytest.ini file to customize your test options.
-
-
-
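The steps above can be sketched with a minimal, self-contained test module. The function under test here is a toy example defined inline (in practice you would import it from your package), and the file name is hypothetical:

```python
# tests/test_liability.py (hypothetical file name)
# pytest collects functions whose names start with "test".

def flat_tax_liability(income, rate):
    """Toy function under test: tax liability under a flat tax."""
    return income * rate

def test_basic_liability():
    # A bare assert statement is what pytest reports on failure.
    assert flat_tax_liability(45000.0, 0.25) == 11250.0

def test_zero_income_owes_nothing():
    assert flat_tax_liability(0.0, 0.25) == 0.0
```

Running `pytest` from the project root discovers and runs both tests automatically; adding a pytest.ini with, e.g., `testpaths = tests` restricts discovery to that folder.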
That’s about all it takes to get started writing unit tests for your code. PSL-cataloged projects provide many excellent examples of a variety of unit tests, so search them for examples to build from.
-In a future Demo Day and blog post, we’ll talk about continuous integration testing to help you get even more out of your unit tests.
-
\ No newline at end of file
diff --git a/demo-days/2020/11/18/demo-day-creating-reform-files/index.html b/demo-days/2020/11/18/demo-day-creating-reform-files/index.html
new file mode 100644
index 0000000..db6e8c6
--- /dev/null
+++ b/demo-days/2020/11/18/demo-day-creating-reform-files/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/2021/03/02/demo-day-taxbrain-to-taxcruncher/index.html b/demo-days/2021/03/02/demo-day-taxbrain-to-taxcruncher/index.html
new file mode 100644
index 0000000..a6917a0
--- /dev/null
+++ b/demo-days/2021/03/02/demo-day-taxbrain-to-taxcruncher/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/OG-USA/Tax-Calculator/open-source/policy-simulation-library/compute-studio/us/2021/01/28/demo-day-how-to-use-og-usa/index.html b/demo-days/OG-USA/Tax-Calculator/open-source/policy-simulation-library/compute-studio/us/2021/01/28/demo-day-how-to-use-og-usa/index.html
new file mode 100644
index 0000000..5ef563c
--- /dev/null
+++ b/demo-days/OG-USA/Tax-Calculator/open-source/policy-simulation-library/compute-studio/us/2021/01/28/demo-day-how-to-use-og-usa/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/PSL/2021/03/08/demo-day-cs-api-stitch/index.html b/demo-days/PSL/2021/03/08/demo-day-cs-api-stitch/index.html
new file mode 100644
index 0000000..cee3dff
--- /dev/null
+++ b/demo-days/PSL/2021/03/08/demo-day-cs-api-stitch/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/PSL/git/github/2021/03/02/demo-day-contributing-psl/index.html b/demo-days/PSL/git/github/2021/03/02/demo-day-contributing-psl/index.html
new file mode 100644
index 0000000..ed42c4b
--- /dev/null
+++ b/demo-days/PSL/git/github/2021/03/02/demo-day-contributing-psl/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/R/Python/unit-testing/2021/08/09/demo-day-unit-testing/index.html b/demo-days/R/Python/unit-testing/2021/08/09/demo-day-unit-testing/index.html
new file mode 100644
index 0000000..2e8d0be
--- /dev/null
+++ b/demo-days/R/Python/unit-testing/2021/08/09/demo-day-unit-testing/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/apps/taxes/benefits/us/2022/04/12/demo-day-policyengine-us/index.html b/demo-days/apps/taxes/benefits/us/2022/04/12/demo-day-policyengine-us/index.html
new file mode 100644
index 0000000..109c37d
--- /dev/null
+++ b/demo-days/apps/taxes/benefits/us/2022/04/12/demo-day-policyengine-us/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/benefits/us/2022/07/14/demo-day-cambridge-cash-assistance/index.html b/demo-days/benefits/us/2022/07/14/demo-day-cambridge-cash-assistance/index.html
new file mode 100644
index 0000000..9e431f5
--- /dev/null
+++ b/demo-days/benefits/us/2022/07/14/demo-day-cambridge-cash-assistance/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/cost-of-capital-calculator/business-taxation/corporate-income-tax/2020/12/03/demo-day-cost-of-capital-calculator/index.html b/demo-days/cost-of-capital-calculator/business-taxation/corporate-income-tax/2020/12/03/demo-day-cost-of-capital-calculator/index.html
new file mode 100644
index 0000000..c833833
--- /dev/null
+++ b/demo-days/cost-of-capital-calculator/business-taxation/corporate-income-tax/2020/12/03/demo-day-cost-of-capital-calculator/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/cost-of-capital-calculator/business-taxation/corporate-income-tax/taxes/2022/04/18/demo-day-ccc-international/index.html b/demo-days/cost-of-capital-calculator/business-taxation/corporate-income-tax/taxes/2022/04/18/demo-day-ccc-international/index.html
new file mode 100644
index 0000000..42f4d70
--- /dev/null
+++ b/demo-days/cost-of-capital-calculator/business-taxation/corporate-income-tax/taxes/2022/04/18/demo-day-ccc-international/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/github/git/workflow/getting-started/2022/06/28/demo-day-github/index.html b/demo-days/github/git/workflow/getting-started/2022/06/28/demo-day-github/index.html
new file mode 100644
index 0000000..d32ef6b
--- /dev/null
+++ b/demo-days/github/git/workflow/getting-started/2022/06/28/demo-day-github/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/individual-income-tax/tax-brain/tax-calculator/2021/04/05/demo-day-stacked-revenue-estimates/index.html b/demo-days/individual-income-tax/tax-brain/tax-calculator/2021/04/05/demo-day-stacked-revenue-estimates/index.html
new file mode 100644
index 0000000..2e8d006
--- /dev/null
+++ b/demo-days/individual-income-tax/tax-brain/tax-calculator/2021/04/05/demo-day-stacked-revenue-estimates/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/individual-income-tax/tax-brain/tax-calculator/2021/06/14/demo-day-tax-brain-python-api/index.html b/demo-days/individual-income-tax/tax-brain/tax-calculator/2021/06/14/demo-day-tax-brain-python-api/index.html
new file mode 100644
index 0000000..5ad965e
--- /dev/null
+++ b/demo-days/individual-income-tax/tax-brain/tax-calculator/2021/06/14/demo-day-tax-brain-python-api/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/individual-income-tax/tax-brain/us/2020/12/23/demo-day-tax-brain/index.html b/demo-days/individual-income-tax/tax-brain/us/2020/12/23/demo-day-tax-brain/index.html
new file mode 100644
index 0000000..8e37583
--- /dev/null
+++ b/demo-days/individual-income-tax/tax-brain/us/2020/12/23/demo-day-tax-brain/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/individual-income-tax/tax-cruncher/2020/11/23/demo-day-tax-cruncher/index.html b/demo-days/individual-income-tax/tax-cruncher/2020/11/23/demo-day-tax-cruncher/index.html
new file mode 100644
index 0000000..30db981
--- /dev/null
+++ b/demo-days/individual-income-tax/tax-cruncher/2020/11/23/demo-day-tax-cruncher/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/jupyter-book/GH-actions/documentation/2021/05/17/demo-day-jupyter-book-deploy/index.html b/demo-days/jupyter-book/GH-actions/documentation/2021/05/17/demo-day-jupyter-book-deploy/index.html
new file mode 100644
index 0000000..5dcd73f
--- /dev/null
+++ b/demo-days/jupyter-book/GH-actions/documentation/2021/05/17/demo-day-jupyter-book-deploy/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/microdf/scf/2021/01/29/demo-day-scf-microdf/index.html b/demo-days/microdf/scf/2021/01/29/demo-day-scf-microdf/index.html
new file mode 100644
index 0000000..8396821
--- /dev/null
+++ b/demo-days/microdf/scf/2021/01/29/demo-day-scf-microdf/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/policy-simulation-library/compute-studio/2021/09/20/demo-day-cs-auto-deploy/index.html b/demo-days/policy-simulation-library/compute-studio/2021/09/20/demo-day-cs-auto-deploy/index.html
new file mode 100644
index 0000000..de2674e
--- /dev/null
+++ b/demo-days/policy-simulation-library/compute-studio/2021/09/20/demo-day-cs-auto-deploy/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/python/data-fusion/synthimpute/2021/12/08/demo-day-synthimpute/index.html b/demo-days/python/data-fusion/synthimpute/2021/12/08/demo-day-synthimpute/index.html
new file mode 100644
index 0000000..1b88bdf
--- /dev/null
+++ b/demo-days/python/data-fusion/synthimpute/2021/12/08/demo-day-synthimpute/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/python/macroeconomics/overlapping-generations/2021/11/01/demo-day-og-core/index.html b/demo-days/python/macroeconomics/overlapping-generations/2021/11/01/demo-day-og-core/index.html
new file mode 100644
index 0000000..77f7c15
--- /dev/null
+++ b/demo-days/python/macroeconomics/overlapping-generations/2021/11/01/demo-day-og-core/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/demo-days/tax/data/us/2021/07/16/demo-day-constructing-tax-data-for-the-50-states/index.html b/demo-days/tax/data/us/2021/07/16/demo-day-constructing-tax-data-for-the-50-states/index.html
new file mode 100644
index 0000000..30ad9e9
--- /dev/null
+++ b/demo-days/tax/data/us/2021/07/16/demo-day-constructing-tax-data-for-the-50-states/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/feed.xml b/feed.xml
deleted file mode 100755
index aaa3a0b..0000000
--- a/feed.xml
+++ /dev/null
@@ -1 +0,0 @@
-Jekyll2023-12-29T12:59:29-06:00https://blog.pslmodels.org/feed.xmlPSL blogUpdates on Policy Simulation Library models.2023: A year in review2023-12-28T00:00:00-06:002023-12-28T00:00:00-06:00https://blog.pslmodels.org/2023-year-in-reviewJason DeBacker2022: A year in review2022-12-31T00:00:00-06:002022-12-31T00:00:00-06:00https://blog.pslmodels.org/2022-year-in-reviewJason DeBackerDemo Day: How does targeted cash assistance affect incentives to work?2022-07-14T00:00:00-05:002022-07-14T00:00:00-05:00https://blog.pslmodels.org/demo-day-cambridge-cash-assistanceMax GhenisDemo Day: Getting Started with GitHub2022-06-28T00:00:00-05:002022-06-28T00:00:00-05:00https://blog.pslmodels.org/demo-day-githubJason DeBackerDemo Day: Analyzing tax competitiveness with Cost-of-Capital-Calculator2022-04-18T00:00:00-05:002022-04-18T00:00:00-05:00https://blog.pslmodels.org/demo-day-ccc-internationalJason DeBacker
\ No newline at end of file
diff --git a/images/MLK_Library_cut2.png b/images/MLK_Library_cut2.png
old mode 100755
new mode 100644
diff --git a/images/OG-USA_logo_long.png b/images/OG-USA_logo_long.png
old mode 100755
new mode 100644
diff --git a/images/biden_dist_fig.png b/images/biden_dist_fig.png
old mode 100755
new mode 100644
diff --git a/images/biden_revenue.png b/images/biden_revenue.png
old mode 100755
new mode 100644
diff --git a/images/biden_volcano.png b/images/biden_volcano.png
old mode 100755
new mode 100644
diff --git a/images/chart-preview.png b/images/chart-preview.png
deleted file mode 100755
index 572b325..0000000
Binary files a/images/chart-preview.png and /dev/null differ
diff --git a/images/copied_from_nb/README.md b/images/copied_from_nb/README.md
deleted file mode 100755
index 37c795c..0000000
--- a/images/copied_from_nb/README.md
+++ /dev/null
@@ -1,3 +0,0 @@
-
Warning
-
-Do not manually save images into this folder. This is used by GitHub Actions to automatically copy images. Any images you save into this folder could be deleted at build time.
\ No newline at end of file
diff --git a/images/cs-auto-deploy/build_history_dashboard.png b/images/cs-auto-deploy/build_history_dashboard.png
old mode 100755
new mode 100644
diff --git a/images/cs-auto-deploy/build_history_dashboard_progress.png b/images/cs-auto-deploy/build_history_dashboard_progress.png
old mode 100755
new mode 100644
diff --git a/images/cs-auto-deploy/build_scheduled_page.png b/images/cs-auto-deploy/build_scheduled_page.png
old mode 100755
new mode 100644
diff --git a/images/cs-auto-deploy/build_status_failed.png b/images/cs-auto-deploy/build_status_failed.png
old mode 100755
new mode 100644
diff --git a/images/cs-auto-deploy/build_status_failed_logs.png b/images/cs-auto-deploy/build_status_failed_logs.png
old mode 100755
new mode 100644
diff --git a/images/cs-auto-deploy/build_status_page_progress.png b/images/cs-auto-deploy/build_status_page_progress.png
old mode 100755
new mode 100644
diff --git a/images/cs-auto-deploy/build_status_page_success.png b/images/cs-auto-deploy/build_status_page_success.png
old mode 100755
new mode 100644
diff --git a/images/cs-auto-deploy/connect_project_page.png b/images/cs-auto-deploy/connect_project_page.png
old mode 100755
new mode 100644
diff --git a/images/cs-auto-deploy/project_dashboard.png b/images/cs-auto-deploy/project_dashboard.png
old mode 100755
new mode 100644
diff --git a/images/cs-auto-deploy/project_home_page.png b/images/cs-auto-deploy/project_home_page.png
old mode 100755
new mode 100644
diff --git a/images/cs-auto-deploy/publish_page.png b/images/cs-auto-deploy/publish_page.png
old mode 100755
new mode 100644
diff --git a/images/cs-auto-deploy/run_project_page_loading.png b/images/cs-auto-deploy/run_project_page_loading.png
old mode 100755
new mode 100644
diff --git a/images/cs-auto-deploy/run_project_page_success.png b/images/cs-auto-deploy/run_project_page_success.png
old mode 100755
new mode 100644
diff --git a/images/cs-auto-deploy/updated_build_history_page.png b/images/cs-auto-deploy/updated_build_history_page.png
old mode 100755
new mode 100644
diff --git a/images/diagram.png b/images/diagram.png
deleted file mode 100755
index 2607910..0000000
Binary files a/images/diagram.png and /dev/null differ
diff --git a/images/favicon.ico b/images/favicon.ico
deleted file mode 100755
index 88bc151..0000000
Binary files a/images/favicon.ico and /dev/null differ
diff --git a/images/tb_report1.png b/images/tb_report1.png
old mode 100755
new mode 100644
diff --git a/images/tb_report2.png b/images/tb_report2.png
old mode 100755
new mode 100644
diff --git a/images/tb_report3.png b/images/tb_report3.png
old mode 100755
new mode 100644
diff --git a/images/tb_report4.png b/images/tb_report4.png
old mode 100755
new mode 100644
diff --git a/images/tb_report5.png b/images/tb_report5.png
old mode 100755
new mode 100644
diff --git a/index.html b/index.html
old mode 100755
new mode 100644
index 1bdfa2e..8891ee2
--- a/index.html
+++ b/index.html
@@ -1,105 +1,1711 @@
-
Our mission at the Policy Simulation Library is to improve public policy by opening up models and data preparation routines for policy analysis.
-To support and showcase our diverse community of users and developers, we engage across several mediums: a monthly newsletter, a Q&A forum, (now-virtual) meetups, our Twitter feed, our YouTube channel, documentation for models in our catalog, and of course, issues and pull requests on GitHub.
-
-
Today, we’re adding a new medium: the PSL Blog.
-We’ll use this space to share major updates on our catalog, provide tutorials, and summarize events or papers that involve our models.
-
-
If you’d like to share your work on our blog, or to suggest content, drop me a line.
-To follow along, add the PSL blog’s RSS feed or subscribe to our newsletter.
-
-
Happy reading,
-
-
Max Ghenis
-
-
Editor, PSL Blog
-
-
-
\ No newline at end of file
diff --git a/listings.json b/listings.json
new file mode 100644
index 0000000..fad6afe
--- /dev/null
+++ b/listings.json
@@ -0,0 +1,34 @@
+[
+ {
+ "listing": "/index.html",
+ "items": [
+ "/posts/2023-12-28-2023-year-in-review.html",
+ "/posts/2022-12-31-2022-year-in-review.html",
+ "/posts/2022-07-14-demo-day-cambridge-cash-assistance.html",
+ "/posts/2022-06-28-demo-day-github.html",
+ "/posts/2022-04-18-demo-day-ccc-international.html",
+ "/posts/2022-04-12-demo-day-policyengine-us.html",
+ "/posts/2022-03-03-DC-workshop.html",
+ "/posts/2021-12-28-2021-year-in-review.html",
+ "/posts/2021-12-08-demo-day-synthimpute.html",
+ "/posts/2021-11-01-demo-day-og-core.html",
+ "/posts/2021-09-20-demo-day-cs-auto-deploy.html",
+ "/posts/2021-08-09-demo-day-unit-testing.html",
+ "/posts/2021-07-16-demo-day-constructing-tax-data-for-the-50-states.html",
+ "/posts/2021-06-14-demo-day-tax-brain-python-api.html",
+ "/posts/2021-05-17-demo-day-jupyter-book-deploy.html",
+ "/posts/2021-04-05-demo-day-stacked-revenue-estimates.html",
+ "/posts/2021-03-08-demo-day-cs-api-stitch.html",
+ "/posts/2021-03-02-demo-day-taxbrain-to-taxcruncher.html",
+ "/posts/2021-03-02-demo-day-contributing-psl.html",
+ "/posts/2021-01-29-demo-day-scf-microdf.html",
+ "/posts/2021-01-28-demo-day-how-to-use-og-usa.html",
+ "/posts/2020-12-23-demo-day-tax-brain.html",
+ "/posts/2020-12-23-2020-year-in-review.html",
+ "/posts/2020-12-03-demo-day-cost-of-capital-calculator.html",
+ "/posts/2020-11-23-demo-day-tax-cruncher.html",
+ "/posts/2020-11-18-demo-day-creating-reform-files.html",
+ "/posts/2020-11-06-introducing-psl-blog.html"
+ ]
+ }
+]
\ No newline at end of file
diff --git a/images/logo.png b/logo.png
similarity index 100%
rename from images/logo.png
rename to logo.png
diff --git a/page2/index.html b/page2/index.html
deleted file mode 100755
index 0ed058b..0000000
--- a/page2/index.html
+++ /dev/null
@@ -1,90 +0,0 @@
-
\ No newline at end of file
diff --git a/posts/2020-11-06-introducing-psl-blog.html b/posts/2020-11-06-introducing-psl-blog.html
new file mode 100644
index 0000000..c3af3c7
--- /dev/null
+++ b/posts/2020-11-06-introducing-psl-blog.html
@@ -0,0 +1,589 @@
+
+
+
+
+
+
+
+
+
+
+
+
+Policy Simulation Library Blog - Introducing the PSL Blog
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Introducing the PSL Blog
+
+
+ A new way to follow models in the Policy Simulation Library catalog.
+
+
+
+
announcements
+
+
+
+
+
+
+
+
+
Author
+
+
Max Ghenis
+
+
+
+
+
Published
+
+
November 6, 2020
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Our mission at the Policy Simulation Library is to improve public policy by opening up models and data preparation routines for policy analysis. To support and showcase our diverse community of users and developers, we engage across several mediums: a monthly newsletter, a Q&A forum, (now-virtual) meetups, our Twitter feed, our YouTube channel, documentation for models in our catalog, and of course, issues and pull requests on GitHub.
+
Today, we’re adding a new medium: the PSL Blog. We’ll use this space to share major updates on our catalog, provide tutorials, and summarize events or papers that involve our models.
+
If you’d like to share your work on our blog, or to suggest content, drop me a line. To follow along, add the PSL blog’s RSS feed or subscribe to our newsletter.
+ The first in Policy Simulation Library’s new live demo series describes specifying tax reforms.
+
+
+
+
demo-days
+
+
+
+
+
+
+
+
+
Author
+
+
Matt Jensen
+
+
+
+
+
Published
+
+
November 18, 2020
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Check out the video:
+
+
We will host Demo Days every two weeks until the end of the year. You can see our schedule on our events page.
+
+
Show notes:
+
I demonstrate how to build policy reform files using the Tax-Brain webapp on Compute Studio. (Useful links below.) This is an introductory lesson that ends with a cliffhanger. We don’t run the model. But we do generate an individual income and payroll tax reform file that is compatible with a range of policy simulation models and analytic tools, some designed for policy decision makers, others for taxpayers and benefits recipients interested in assessing their own circumstances.
+
Beyond individual and payroll tax analysis, the reform file can be used with models that assess pass-through and corporate taxation of businesses, as well as a variety of income benefit programs. A wide range of use cases will occupy future events.
+ How to calculate a taxpayer’s liabilities under current law and under a policy reform.
+
+
+
+
demo-days
+
individual-income-tax
+
tax-cruncher
+
+
+
+
+
+
+
+
+
Author
+
+
Peter Metz
+
+
+
+
+
Published
+
+
November 23, 2020
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
For the Demo Day on November 16, I showed how to calculate a taxpayer’s liabilities under current law and under a policy reform with Tax-Cruncher. The Tax-Cruncher web application takes two sets of inputs: a taxpayer’s demographic and financial information and the provisions of a tax reform.
+
For the first Demo Day example (3:50), we looked at how eliminating the state and local tax (SALT) deduction cap and applying payroll tax to earnings above $400,000 would affect a high earner. In particular, our hypothetical filer had $500,000 in wages, $100,000 in capital gains, and $100,000 in itemizable expenses. You can see the results at Compute Studio simulation #634.
+
For the second example (17:50), we looked at how expanding the Earned Income Tax Credit (EITC) and Child Tax Credit would impact a family with $45,000 in wages and two young children. You can see the results at Compute Studio simulation #636.
I begin by illustrating the various parameters available for the user to manipulate. These include parameters of the business and individual income tax systems, as well as parameters representing economic assumptions (e.g., inflation rates and nominal interest rates) and parameters dictating financial and accounting policy (e.g., the fraction of financing using debt). Note that all default values for tax policy parameters represent the “baseline policy”, which is defined as the current law policy in the year being analyzed (which itself is a parameter the user can change). Other parameters are estimated using historical data following the methodology of CBO (2014).
+
Next, I change a few parameters and run the model. In this example, I move the corporate income tax rate up to 28% and lower bonus depreciation for assets with depreciable lives of 20 years or less to 50%.
+
Finally, I discuss how to interpret output. The web app returns a table and three figures summarizing marginal effective total tax rates on new investments. This selection of output helps give one a sense of the overall changes, as well as effects across asset types, industries, and types of financing. For the full model output, one can click on “Download Results”. Doing so will download four CSV files containing several measures of the impact of the tax system on investment for very fine asset and industry categories. Users can take these files and create tables and visualizations relevant to their own use case.
+
Please take the model for a spin and simulate your own reform. If you have questions, comments, or suggestions, please let me know on the PSL Discourse (non-technical questions) or by opening an issue in the CCC GitHub repository (technical questions).
+
+
+
+
+
\ No newline at end of file
diff --git a/posts/2020-12-23-2020-year-in-review.html b/posts/2020-12-23-2020-year-in-review.html
new file mode 100644
index 0000000..e7f3085
--- /dev/null
+++ b/posts/2020-12-23-2020-year-in-review.html
@@ -0,0 +1,599 @@
+
+
+
+
+
+
+
+
+
+
+
+
+Policy Simulation Library Blog - 2020: A year in review
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
2020: A year in review
+
+
+ Highlights from the Policy Simulation Library in 2020.
+
+
+
+
psl
+
psl-foundation
+
+
+
+
+
+
+
+
+
Author
+
+
Jason DeBacker
+
+
+
+
+
Published
+
+
December 23, 2020
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
This year has been one to forget! But 2020 did have its bright spots, especially in the PSL community. This post reviews some of the highlights from the year.
+
The Library was able to welcome two new models to the catalog in 2020: microdf and OpenFisca-UK. microdf provides a number of useful tools for use with economic survey data. OpenFisca-UK builds off the OpenFisca platform, offering a microsimulation model for tax and benefit programs in the UK.
+
In addition, four new models were added to the Library as incubating projects. The ui-calculator model has received a lot of attention this year in the U.S., as it provides the capability to calculate unemployment insurance benefits across U.S. states, a major mode of delivering financial relief to individuals during the COVID crisis. PCI-Outbreak directly relates to the COVID crisis, using machine learning and natural language processing to estimate the true extent of the COVID pandemic in China. The model finds that actual COVID cases are significantly higher than what official statistics claim. The COVID-MCS model considers COVID case counts and test positivity rates to measure whether or not U.S. communities are meeting certain benchmarks in controlling the spread of the disease. On a lighter note, the Git-Tutorial project provides instruction and resources for learning to use Git and GitHub, with an emphasis on the workflow used by many projects in the PSL community.
+
The organization surrounding the Policy Simulation Library has been bolstered in two ways. First, we have formed a relationship with the Open Collective Foundation, who is now our fiscal host. This allows PSL to accept tax deductible contributions that will support the efforts of the community. Second, we’ve formed the PSL Foundation, with an initial board that includes Linda Gibbs, Glenn Hubbard, and Jason DeBacker.
+
Our outreach efforts have grown in 2020 to include the regular PSL Demo Day series and this PSL Blog. Community members have also presented work with PSL models at the PyData Global Conference, the Tax Economists Forum, AEI, the Coiled Podcast, and the Virtual Global Village Podcast. New users will also find a better experience learning how to use and contribute to PSL models as many PSL models have improved their documentation through the use of Jupyter Book (e.g., see the Tax-Calculator documentation).
+ Computing the impact of US tax reform with the Tax-Brain web-app.
+
+
+
+
demo-days
+
individual-income-tax
+
tax-brain
+
us
+
+
+
+
+
+
+
+
+
Author
+
+
Anderson Frailey
+
+
+
+
+
Published
+
+
December 23, 2020
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
For this PSL demo-day I showed how to use the Tax-Brain web-application, hosted on Compute Studio, to analyze proposed individual income tax policies. Tax-Brain integrates the Tax-Calculator and Behavioral-Responses models to make running both static and dynamic analyses of the US federal income and payroll taxes simple. The web interface for the model makes it possible for anyone to run their own analyses without writing a single line of code.
+
We started the demo by simply walking through the interface and features of the web-app before creating our own sample reform to model. This reform, which to my knowledge does not reflect any proposals currently up for debate, included changes to the income and payroll tax rates, bringing back personal exemptions, modifying the standard deduction, and implementing a universal basic income.
+
While the model ran, I explained how Tax-Brain validated all of the user inputs, the data behind the model, and how the final tax liability projections are determined. We concluded by looking through the variety of tables and graphs Tax-Brain produces and how they can easily be shared with others.
+
+
+
+
+
\ No newline at end of file
diff --git a/posts/2021-01-28-demo-day-how-to-use-og-usa.html b/posts/2021-01-28-demo-day-how-to-use-og-usa.html
new file mode 100644
index 0000000..88da324
--- /dev/null
+++ b/posts/2021-01-28-demo-day-how-to-use-og-usa.html
@@ -0,0 +1,630 @@
+
+
+
+
+
+
+
+
+
+
+
+
+Policy Simulation Library Blog - Demo Day: The OG-USA macroeconomic model of U.S. fiscal policy
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Demo Day: The OG-USA macroeconomic model of U.S. fiscal policy
+
+
+ How to model the macroeconomic effects of tax reform with a web app.
+
+
+
+
demo-days
+
OG-USA
+
Tax-Calculator
+
open-source
+
policy-simulation-library
+
compute-studio
+
us
+
+
+
+
+
+
+
+
+
Author
+
+
Richard W. Evans
+
+
+
+
+
Published
+
+
January 28, 2021
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
In this PSL Demo Day, I demonstrate how to use the open source OG-USA macroeconomic model of U.S. fiscal policy. Jason DeBacker and I (Richard Evans) have been the core maintainers of this project and repository for the last six years. This demo is organized into the following sections. The YouTube webinar linked above took place on January 11, 2021.
The Policy Simulation Library is a decentralized organization of open source policy models. The Policy Simulation Library GitHub organization houses many open source repositories, each of which represents a curated policy project by a diverse group of maintainers. The projects that have met the highest standards of best practices and documentation are designated as psl-cataloged, while newer projects that are in earlier stages are designated as psl-incubating. The philosophy and goal of the PSL environment is to make policy modeling open and transparent. It also allows more collaboration and cross-project contributions and interactions.
+
The Policy Simulation Library group has been holding these PSL Demo Day webinars since the end of 2020. The video of each webinar is available on the Policy Simulation Library YouTube channel. These videos are a great resource for learning the different models available in the PSL community, how the models interact, how to contribute to them, and what is on the horizon in their development. Also excellent in many of the PSL Demo Day webinars is a demonstration of how to use the models on the Compute Studio web application platform.
+
I have been a participant in and contributor to the PSL community since its inception. I love economic policy modeling, and I have learned how sophisticated and complicated economic policy models can be. Any simulation can rest on hundreds of underlying assumptions, some of which may not be explicitly transparent. I think models that are used for public policy analysis have a philosophical imperative to be open source. This allows others to verify results and test sensitivity to assumptions.
+
Another strong benefit of open source modeling is that it is fundamentally apolitical. With proprietary closed-source policy models, an outside observer might criticize the results of the model based on the perceived political biases of the modeler or the sponsoring organization. With open-source models, a critic can be redirected to the underlying assumptions, structure, and content of the model. This is constructive criticism and debate that moves the science forward. In the current polarized political environment in the U.S., open-source modeling can provide a constructive route for bipartisan cooperation and the democratization of computational modeling. Furthermore, open-source modeling and workflows encourage the widest forms of collaboration and contributions.
+
+
+
Description of OG-USA model
+
OG-USA is an open-source overlapping generations, dynamic general equilibrium, heterogeneous agent, macroeconomic model of U.S. fiscal policy. The GitHub repository for the OG-USA source code is github.com/PSLmodels/OG-USA. This repository contains all the source code and instructions for loading and running OG-USA and all of its dependencies on your local machine. We will probably do another PSL Demo Day on how to run OG-USA locally. This Demo Day webinar is about running OG-USA on the Compute Studio web application; see the section “Using OG-USA on Compute Studio” below.
+
As a heterogeneous agent macroeconomic model, OG-USA allows for distributional analyses at the individual and firm level. That is, you can simulate the model and answer questions like, “How will an increase in the top three personal income tax rates affect people of different ages and income levels?” Microsimulation models can answer these types of distributional analysis questions as well. However, the difference between a macroeconomic model and a microsimulation model is that the macroeconomic models can simulate how each of those individuals and firms will respond to a policy change (e.g., lower labor supply or increased investment demand) and how those behavioral responses will add up and feed back into the macroeconomy (e.g., the effect on GDP, government revenue, government debt, interest rates, and wages).
+
OG-USA is a large-scale model comprising tens of thousands of lines of code. All of that code, along with the full history of collaboration and updates, is publicly available on the internet, which is what makes the project open source. However, it is not enough to simply post one’s code. We have gone to great lengths to write in-line comments and docstrings in the code to clarify the purpose of each function and line of code. For example, look in the OG-USA/ogusa/household.py module. The first function, on line 18, is the marg_ut_cons() function. As its docstring describes, its purpose is to “Compute the marginal utility of consumption.”
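The idea behind that function can be shown in a short, hedged sketch. This assumes a standard CRRA utility form and is an illustration of the documented purpose, not OG-USA’s actual code:

```python
import numpy as np

def marg_ut_cons(c, sigma):
    """Compute the marginal utility of consumption.

    Assumes CRRA utility u(c) = c**(1 - sigma) / (1 - sigma),
    whose marginal utility is u'(c) = c**(-sigma).

    Args:
        c (array_like): consumption amounts
        sigma (float): coefficient of relative risk aversion

    Returns:
        numpy.ndarray: marginal utility of consumption
    """
    return np.asarray(c, dtype=float) ** (-sigma)

# With sigma = 2, doubling consumption from 1 to 2 cuts marginal
# utility from 1.0 to 0.25, reflecting diminishing marginal utility.
print(marg_ut_cons([1.0, 2.0], 2.0))
```

See the docstring in OG-USA/ogusa/household.py for the authoritative definition.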
+
These in-code docstrings are not enough on their own. We have also created textbook-style OG-USA documentation at pslmodels.github.io/OG-USA/ using Jupyter Book. This form of documentation has the advantage of being in book form and available online, and it allows us to update the documentation in the open-source repository so changes and versions can be tracked. It describes the OG-USA API, OG-USA theory, and OG-USA calibration. As with the model, this documentation is always a work in progress, but being open source allows outside contributors to help with its updating and error checking.
+
One particular strength of the OG-USA model I want to highlight is its interaction with microsimulation models to incorporate information about the tax incentives faced by the heterogeneous households in the model. We have interfaced OG-USA with microsimulation models in India and at the European Commission. OG-USA’s default for modeling the United States is to use the open-source Tax-Calculator microsimulation model, which was described by Anderson Frailey in the last Demo Day of 2020. However, DeBacker and I currently have a project in which we use OG-USA to simulate policies using the Tax Policy Center’s microsimulation model. The way OG-USA interfaces with microsimulation models to incorporate rich tax data is described in the calibration chapter of the documentation entitled “Tax Functions”.
+
+
+
Using OG-USA on Compute Studio
+
In the demonstration, I focus on how to run experiments and simulations with OG-USA using the Compute Studio web application platform rather than installing and running the model on your local machine. To use OG-USA on this web application, you will need a Compute Studio account. Once you have an account, you can start running any model available through the site. For some models, you will have to pay for the compute time, although the cost of running these models is very modest. However, all Compute Studio simulations of the OG-USA model are currently sponsored by the Open Source Economics Laboratory. This subsidy will probably run out in the next year, but we are always looking for funding for these models.
+
Once you are signed up and logged in to your Compute Studio account, you can go to the OG-USA model on Compute Studio at compute.studio/PSLmodels/OG-USA. The experiment that we simulated in the demonstration is available at compute.studio/PSLmodels/OG-USA/206. The description at the top of the simulation page describes the changes we made. You can look through the input page by clicking on the “Inputs” tab. We ran the model by clicking the green “Run” button at the lower left of the page. The model took about 5 hours to run, so I pre-computed the results that we discussed in the demo. The outputs of the experiment are available in the “Outputs” tab on the page. I also demonstrated how one can click the “Download Results” button at the bottom of the “Outputs” tab to download more results from the simulation. However, the full set of results is only available by installing and running the OG-USA model simulation on your local machine.
+
The benefits of the Compute Studio web application are that running the OG-USA model is much easier for the non-expert, and the multiple-hour computation time can be completed on a remote machine in the cloud.
Demo Day: Running the scf and microdf Python packages in Google Colab

Analyzing US wealth data in a web-based Python notebook.

Tags: demo-days, microdf, scf

Author: Max Ghenis
Published: January 29, 2021
For Monday’s PSL Demo Day, I showed how to use the scf and microdf PSL Python packages from the Google Colab web-based Jupyter notebook interface.
+
The scf package extracts data from the Federal Reserve’s Survey of Consumer Finances, the canonical source of US wealth microdata. scf has a single function: load(years, columns), which returns a pandas DataFrame with the specified column(s), each record’s survey weight, and the year (when multiple years are requested).
+
The microdf package analyzes survey microdata, such as that returned by the scf.load function. It offers a consistent paradigm for calculating statistics like means, medians, and sums, as well as inequality statistics like the Gini index. Most functions are structured as f(df, col, w, groupby), where df is a pandas DataFrame of survey microdata, col is the name of the column(s) to be summarized, w is the weight column, and groupby is the column(s) by which to group records before summarizing.
+
Using Google Colab, I showed how to use these packages to quickly calculate mean, median, and total wealth from the SCF data, without having to install any software or leave the browser. I also demonstrated how to use the groupby argument of microdf functions to show how different measures of wealth inequality have changed over time. Finally, I previewed some of what’s to come with scf and microdf : imputations, extrapolations, inflation, visualization, and documentation, to name a few priorities.
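That workflow can be sketched with a hand-built DataFrame standing in for real SCF extracts. The weighted_mean helper below imitates the f(df, col, w, groupby) signature described above; it is illustrative, not microdf’s actual implementation:

```python
import pandas as pd

# Toy stand-in for what scf.load(years, columns) returns: one row per
# household record, with the requested column, a survey weight, and the year.
df = pd.DataFrame({
    "networth": [50_000, 120_000, 1_000_000, 30_000],
    "wgt": [2_000, 1_500, 100, 2_400],
    "year": [2019, 2019, 2019, 2019],
})

def weighted_mean(df, col, w, groupby=None):
    """Weighted mean following the microdf-style f(df, col, w, groupby) signature."""
    if groupby is None:
        return (df[col] * df[w]).sum() / df[w].sum()
    return df.groupby(groupby).apply(lambda g: (g[col] * g[w]).sum() / g[w].sum())

print(weighted_mean(df, "networth", "wgt"))          # overall weighted mean
print(weighted_mean(df, "networth", "wgt", "year"))  # one weighted mean per year
```

The weight column matters: the unweighted mean of these four records is dominated by the single high-wealth record, while the survey weights pull the estimate back toward the typical household.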
How to help software projects in the Policy Simulation Library.

Tags: demo-days, PSL, git, github

Author: Jason DeBacker
Published: March 2, 2021
In the most recent PSL Demo Day, I illustrate how to contribute to PSL projects. The open source nature of projects in the PSL catalog allows anyone to contribute. The modularity of the code, coupled with robust testing, means that one can bite off small pieces that help improve the models and remain confident those changes work as expected.
+
To begin the process of finding where to contribute to PSL projects, I advise looking through the PSL GitHub Organization to see what projects interest you. Once a project of interest is identified, looking over the open “Issues” can provide a sense of where model maintainers and users are looking for help (see especially the “Help Wanted” tags). It is also completely appropriate to create a new Issue to express interest in helping and ask for direction on where that might best be done given your experience and preferences.
+
When you are ready to begin contributing to a project, you’ll want to fork and clone the GitHub repository to get the files onto your local machine, ready for you to work with. Many PSL projects outline the detailed steps to get you up and running. For example, see the Tax-Calculator Contributor Guide, which outlines the step-by-step process for doing this and for confirming that everything works as expected on your computer.
+
After you are set up and ready to begin modifying source code for the PSL project(s) you’re interested in contributing to, you can reference the PSL-incubating Git-Tutorial project that provides more details on the Git workflow followed by most PSL projects.
+
As you contribute, you may want to get more involved in the community. A couple of ways to do this are to join any of the PSL community events, all of which are open to the public, and to post to the PSL Discourse Forums. These are great places to meet community members and ask questions about how and where to best contribute.
+
I hope this helps you get started as a PSL contributor – we look forward to getting you involved in making policy analysis better and more transparent!
Demo Day: Moving policy reform files from Tax-Brain to Tax-Cruncher

How to move reforms between a tax-unit-level and society-wide model with the Compute.Studio API.

Tags: demo-days

Author: Matt Jensen
Published: March 2, 2021
Check out the video:
+
+
+
Show notes:
+
I demonstrate how to move a policy reform file from Tax-Brain to Tax-Cruncher using the Compute.Studio API. See the Demo C/S simulation linked below for text instructions that accompany the video.
+
Demo Day: Stitching together apps on Compute Studio

Creating an app with the Compute Studio API.

Tags: demo-days, PSL

Author: Hank Doupe
Published: March 8, 2021
In Demo Day 8, I talked about connecting multiple apps on Compute Studio with PSL Stitch. The source code for PSL Stitch can be found in this repository.
+
Stitch is composed of three components:
+
+
A Python package that can be installed and used like any other Python package.
+
A RESTful API built with FastAPI that is called remotely to create simulations on Compute Studio.
+
A GUI built with ReactJS that makes calls to the REST API to create and monitor simulations.
+
+
One of the cool things about this app is that it uses ParamTools to read the JSON files under the hood. This means that it can read links to data in other Compute Studio runs, files on GitHub, or just plain JSON. Here are some example parameters:
business tax parameters: {"CIT_rate": [{"value": 0.25, "year": 2021}]}
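ParamTools does the real parsing and validation, but the core idea of accepting a dict, a JSON string, or a link and normalizing it to one form can be sketched like this (the function name and dispatch logic are illustrative, not Stitch’s or ParamTools’ actual code):

```python
import json
from urllib.request import urlopen

def read_params(source):
    """Normalize a parameter source (dict, JSON string, or URL) to a dict."""
    if isinstance(source, dict):
        return source  # already parsed
    if isinstance(source, str) and source.startswith(("http://", "https://")):
        # e.g., a raw JSON file on GitHub or a Compute Studio endpoint
        with urlopen(source) as resp:
            return json.loads(resp.read().decode("utf-8"))
    return json.loads(source)  # plain JSON text

params = read_params('{"CIT_rate": [{"value": 0.25, "year": 2021}]}')
print(params["CIT_rate"][0]["value"])  # 0.25
```

Because every source is reduced to the same dict shape, the rest of the app can treat linked Compute Studio runs, GitHub files, and inline JSON identically.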
+
+
After clicking run, three simulations will be created on Compute Studio and the app will update as soon as the simulations have finished:
+
+
+
+
+
+
+
Once they are done, the simulations are best viewed and interacted with on Compute Studio, but you can still inspect the JSON response from the Compute Studio API:
+
+
+
+
I created this app to show that it’s possible to build apps on top of the Compute Studio API. I think PSL Stitch is a neat example of how to do this, but I am even more excited to see what others build next.
+
Also, this is an open source project and has lots of room for improvement. If you are interested in learning web technologies related to REST APIs and frontend development with JavaScript, then this project could be a good place to start!
+
Demo Day: Producing stacked revenue estimates with the Tax-Calculator Python API

How to evaluate the cumulative effects of a multi-part tax reform.

Tags: demo-days, individual-income-tax, tax-brain, tax-calculator

Author: Jason DeBacker
Published: April 5, 2021
It’s often useful to be able to identify the effects of specific provisions individually and not just the overall impact of a proposal with many provisions. Indeed, when revenue estimates of tax law changes are reported (such as this JCT analysis of the American Rescue Plan Act of 2021), they are typically reported on a provision-by-provision basis. Finding the provision-by-provision revenue estimates is cumbersome with the Tax-Brain web application both because it’s hard to iterate over many provisions and because the order matters when stacking estimates, so that one needs to keep this order in mind as parameter values are updated for each additional provision in a full proposal.
+
In the PSL Demo Day on April 5, 2021, I show how to use the Python API of Tax-Calculator to efficiently produce stacked revenue estimates. In fact, after some initial setup, this can be done with just 12 lines of code (plus a few more to make the output look nice). The Google Colab notebook that illustrates this approach can be found at this link, but here I’ll walk through the four steps that are involved:
+
+
Divide up the full proposal into strings of JSON text that contain each provision you want to analyze. My example breaks up the Biden 2020 campaign proposal into seven provisions, but this is illustrative and you can make more or fewer provisions depending on the detail you would like to see.
+
Create a dictionary that contains, as its values, the JSON strings. A couple of notes on this. First, the dictionary keys should be descriptive of the provisions, as they will become the labels for each provision in the final table of revenue estimates we produce. Second, order matters here. You’ll want to be sure the current law baseline is first (the value for this will be an empty dictionary). Then you specify the provisions. The order you specify will likely affect the revenue estimate for a given provision (for instance, expanding or restricting a deduction has a larger revenue effect when rates are higher), but there are no hard-and-fast rules on the “right” order. Traditionally, rate changes are stacked first and tax expenditures later in the order.
+
Iterate over this dictionary. With a dictionary of provisions in hand, we can write a “for loop” to iterate over the provisions, simulating the Tax-Calculator model at each step. Note that when the Policy class object in Tax-Calculator is modified, it only needs to be told the changes in tax law parameters relative to its current state. In other words, when we are stacking provisions, estimating the incremental effect of each, you can think of the Policy object as having a baseline policy represented by the current law baseline plus all provisions that were analyzed before the provision at the current iteration. The Policy class was created in this way so that one can easily represent policy changes, requiring the user to input only the set of parameters that are modified, not every single parameter’s value under the hypothetical policy. But this also makes it parsimonious to stack provisions as we are doing here. Notice that the JSON strings for each provision (created in Step 1) can be specified independent of the stacking order. We only needed to slice the full set of proposals into discrete chunks; we didn’t need to worry about creating specifications of cumulative policy changes.
+
Format output for presentation. After we’ve run a Tax-Calculator simulation for the current law baseline plus each provision (and each year in the budget window), we’ve got all the output we need. With this output, we can quickly create a table that will nicely present our stacked revenue estimate. One good check to do here is to create totals across all provisions and compare this to the simulated revenue effects of running the full set of proposals in one go. This check helps to ensure that you didn’t make an error in specifying your JSON strings. For example, it’s easy to leave out one or more provisions, especially if there are many.
+
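The loop in step 3 can be sketched as follows. To keep the sketch self-contained, a made-up revenue function stands in for actually running Tax-Calculator (in the real notebook, each iteration implements the reform on a Policy object and simulates a Calculator). The incremental estimate for each provision is its stacked total minus the running total so far:

```python
# Stand-in for running Tax-Calculator under a cumulative policy: "policy"
# is a dict of parameter changes and revenue is a fake linear function of
# those parameters, purely for illustration.
def revenue(policy):
    base = 1000.0  # hypothetical baseline revenue, $billions
    return base + 50 * policy.get("raise_top_rate", 0) - 30 * policy.get("expand_credit", 0)

# Step 2: ordered dict of provisions, current-law baseline first.
provisions = {
    "Current law": {},
    "Raise top rate": {"raise_top_rate": 1},
    "Expand credit": {"expand_credit": 1},
}

# Step 3: stack provisions, recording each one's incremental revenue effect.
cumulative = {}
prior_total = None
estimates = {}
for name, reform in provisions.items():
    cumulative.update(reform)  # layer this provision on top of the stack
    total = revenue(cumulative)
    if prior_total is not None:
        estimates[name] = total - prior_total
    prior_total = total

print(estimates)  # {'Raise top rate': 50.0, 'Expand credit': -30.0}
```

Because cumulative is never reset, each provision is evaluated against the baseline plus everything stacked before it, which is exactly the incremental logic described above.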
+
I hope this provides a helpful template for your own analysis. Note that one can modify this code in several useful ways. For example, within the for loop, the Behavioral-Responses package can be called to produce revenue estimates that take into account behavioral feedback. Or one could store the individual income tax and payroll tax revenue impacts separately (rather than returning the combined values as in the example notebook). Additional outputs (even the full set of microdata after each provision is applied) can be stored for even more analysis.
+
In the future, look for Tax-Brain to add stacked revenue estimates to its capabilities. It’ll still be important for users to carve up their full list of policy changes into sets of provisions as we did in Steps 1 and 2 above, but Tax-Brain will then take care of the rest behind the scenes.
+
Demo Day: Updating Jupyter Book documentation with GitHub Actions

How to keep interactive programmatic notebook-based documentation up-to-date in your pull request workflow.

Tags: demo-days, jupyter-book, GH-actions, documentation

Author: Jason DeBacker
Published: May 17, 2021
Open source projects must maintain clear and up-to-date documentation in order to attract users and contributors. Because of this, PSL sets minimum standards for documentation among cataloged projects in its model criteria. A recent innovation in executable books, Jupyter Book, has provided an excellent format for model documentation and has been widely adopted by PSL projects (see for example OG-USA, Tax-Brain, Tax-Calculator).
+
Jupyter Book allows one to write documents with executable code and text together, as in Jupyter notebooks. But Jupyter Book pushes this further by allowing documents with multiple sections, better integration of TeX for symbols and equations, BibTeX-style references and citations, links between sections, and Sphinx integration (for auto-built documentation of model APIs from source code). Importantly for sharing documentation, Jupyter Books can easily be compiled to HTML, PDF, or other formats. Portions of a Jupyter Book that contain executable code can be downloaded as Jupyter Notebooks or opened in Google Colab or Binder.
+
The Jupyter Book documentation is excellent and will help you get started creating your “book” (tip: pay close attention to formatting details, including proper whitespace). What I do here is outline how you can easily deploy your documentation to the web and keep it up-to-date with your project.
+
I start from the assumption that you have the source files to build your Jupyter Book checked into the main branch of your project (these may be yml, md, rst, ipynb, or other files). For version control purposes, and to keep your repo trim, you generally don’t want to check the built documentation files into this branch (tip: consider adding the folder these files will go to, e.g., /_build, to your .gitignore). When these files are in place and you can successfully build your book locally, it’s time for the first step.
+
Step 1: Add two GH Actions to your project’s workflow: 1. An action to check that your documentation files build without an error. I like to run this on each push to a PR. The action won’t fail on warnings, but it will fail if your Jupyter Book doesn’t build at all. An example of this action from the OG-USA repo is here:
+
name: Check that docs build
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2  # If you're using actions/checkout@v2 you must set persist-credentials to false in most cases for the deployment to work correctly.
        with:
          persist-credentials: false

      - name: Setup Miniconda
        uses: conda-incubator/setup-miniconda@v2
        with:
          activate-environment: ogusa-dev
          environment-file: environment.yml
          python-version: 3.7
          auto-activate-base: false

      - name: Build  # Build Jupyter Book
        shell: bash -l {0}
        run: |
          pip install jupyter-book
          pip install sphinxcontrib-bibtex==1.0.0
          pip install -e .
          cd docs
          jb build ./book
+
To use this in your repo, you’ll just need to change a few settings, such as the name of the environment and perhaps the Python version and path to the book source files. Note that in the above yml file sphinxcontrib-bibtex is pinned. You may be able to unpin this, but OG-USA needed the pin for the documentation to compile properly due to changes in the jupyter-book and sphinxcontrib-bibtex packages.
+
+
An action that builds and deploys the Jupyter Book to GH Pages. The OG-USA project uses the deploy action from James Ives for this step. This is something that you will want to run when PRs are merged into your main branch so that the documentation is kept up-to-date with the project. To modify this action for your repo, you’ll need to change the repo name, the environment name, and potentially the Python version, branch name, and path to the book source files.
+
+
Step 2: Once the action in (2) above is run, your compiled Jupyter Book docs will be pushed to a gh-pages branch in your repository (the action will create this branch for you if it doesn’t already exist). At this point, you should be able to see your docs at the URL https://GH_org_name.github.io/Repo_name, but the site probably won’t look right until you complete this next step. To have your Jupyter Book render on the web as you see it on your machine, you will need to push and merge an empty file named .nojekyll into your repo’s gh-pages branch.
+
That’s it! With these actions, you can be sure that your book continues to compile and that a new version will be published to the web with each merge to your main branch, keeping your documentation up-to-date.
+
Some additional tips:
+
+
Use Sphinx to document your project’s API. By doing so you’ll automate an important part of your project’s documentation: as long as the docstrings are updated when the source code is, the Jupyter Book you are publishing to the web will stay in sync with no additional work needed.
+
You can have your gh-pages-hosted documentation point to a custom URL.
+
Project maintainers should ensure that docs are updated with relevant PRs (e.g., if a PR changes source code affecting a user interface, then documentation showing example usage should be updated) and should help contributors make the necessary changes to the documentation source files.
+
+
Demo Day: Using the TaxBrain Python API

A programmatic interface to compute the impact of tax reform.

Tags: demo-days, individual-income-tax, tax-brain, tax-calculator

Author: Anderson Frailey
Published: June 14, 2021
The TaxBrain project was primarily created to serve as the backend of the Tax-Brain web application. But at its core, TaxBrain is a Python package that greatly simplifies tax policy analysis. For this PSL Demo Day, I demonstrated TaxBrain’s capabilities as a standalone package and showed how to use it to produce high-level summaries of the revenue impacts of proposed tax policies. The Jupyter Notebook from the presentation can be found here.
+
TaxBrain’s Python API allows you to run a full analysis of income tax policies in just three lines of code:
Here, START_YEAR and END_YEAR are the first and last years, respectively, of the analysis; use_cps is a boolean indicating that you want to use the CPS-based microdata file prepared for use with Tax-Calculator; and REFORM_POLICY is either a JSON file or a Python dictionary that specifies a reform suitable for Tax-Calculator. The forthcoming release of TaxBrain will also include a feature that allows you to perform a stacked revenue analysis. The inspiration for this feature was presented by Jason DeBacker in a previous Demo Day.
+
Once TaxBrain has been run, there are a number of methods and functions included in the package to create tables and plots that summarize the results. I used the Biden 2020 campaign proposal in the demo, and the resulting figures are below. The first is a “volcano plot” that makes it easy to see the magnitude of the change in tax liability faced by individuals across the income distribution. Each dot represents a tax unit, and the x and y variables can be customized based on the user’s needs.
+
+
The second gives a higher-level look at how taxes change in each income bin. It breaks down what percentage of each income bin faces a tax increase or decrease, and the size of that change.
+
+
The final plot shown in the demo simply shows tax liabilities by year over the budget window.
+
+
The last feature I showed was TaxBrain’s automated reports. TaxBrain uses saved results and an included report template to write a report summarizing the findings of your simulation. The reports include tables and figures similar to those you may find in comparable write-ups by the Joint Committee on Taxation or the Tax Policy Center, including a summary of significant changes caused by the reform, and all you need is one line of code:
The above code will save a PDF copy of the report in a directory called biden along with PNG files for each of the graphs created and the raw Markdown text used for the report, which you can then edit as needed if you would like to add content to the report that is not already included. Screenshots of the default report are included below.
+
+
There are, of course, downsides to using TaxBrain instead of Tax-Calculator directly. Specifically, it’s more difficult, and sometimes impossible, to perform custom tasks like modeling a feature of the tax code that hasn’t been added to Tax-Calculator yet or doing advanced work with marginal tax rates. But for day-to-day tax modeling, the TaxBrain Python package can significantly simplify your workflow.
+
+
Demo Day: Constructing tax data for the 50 states

A new dataset to facilitate state-level analysis of federal tax reforms.

Tags: demo-days, tax, data, us

Author: Don Boyd
Published: July 16, 2021
Federal income tax reform impacts can vary dramatically across states. The cap on state and local tax deductions (SALT) is a well-known example, but other policies also have differential effects because important tax-relevant features vary across states, such as the income distribution; the relative importance of wage, business, and retirement income; and family size and structure. Analyzing how policy impacts vary across states requires data that faithfully represent the characteristics of the 50 states.
+
This Demo Day described a method and software for constructing state weights for microdata files that (1) come as close as possible to targets for individual states, while (2) ensuring that the state weights for each tax record sum to its national weight. The latter objective ensures that the sum of state impacts for a tax reform equals the national impact.
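Objective (2) is an adding-up constraint: if W is a records-by-states matrix of state weights, each row of W must sum to that record's national weight. A toy numpy check of why that guarantees state impacts sum to the national impact (the shares and impacts here are invented; the actual project estimates the weights with a Poisson model against thousands of targets):

```python
import numpy as np

national_weight = np.array([250.0, 180.0, 95.0])  # one national weight per tax record
state_share = np.array([                           # invented shares of each record across 3 states
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
])

# State weights: split each record's national weight across the states.
W = national_weight[:, None] * state_share

# Constraint (2): the state weights for each record sum to its national weight...
assert np.allclose(W.sum(axis=1), national_weight)

# ...so summing any reform's weighted impact over states reproduces the
# national total exactly.
impact_per_record = np.array([1000.0, -500.0, 250.0])  # hypothetical per-record tax change
state_impacts = W.T @ impact_per_record
assert np.isclose(state_impacts.sum(), national_weight @ impact_per_record)
```

The hard part of the project is choosing W to also hit the state-level targets in objective (1); the adding-up property above is what makes the state results consistent with the national ones.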
+
This project developed state weights for a data file with more than 200,000 microdata records. The weighted data file comes within 0.01% of desired values for more than 95% of approximately 10,000 targets.
+
The goal of the slides and video was to enable a motivated Python-skilled user of the PSL TaxData and Tax-Calculator projects to reproduce project results: 50-state weights for TaxData’s primary output, the puf.csv microdata file (based primarily on an IRS Public Use File), using early-stage open-source software developed in the project. Thus, the demo is technical and focused on nuts and bolts.
+
The methods and software can also be used to:
+
+
Create geographic-area weights for other microdata files
+
Apportion state weights to Congressional Districts or counties, if suitable targets can be developed
+
Create state-specific microdata files suitable for modeling state income taxes
+
+
The main topics covered in the slides and video are:
+
+
Creating national and state targets from IRS summary data
+
Preparing a national microdata file for state weighting
+
Approaches to constructing geographic weights
+
Running software that implements the Poisson-modeling approach used in the project
+
Measures of quality of the results
+
+
Demo Day: Unit testing for open source projects

How to ensure that individual functions do what you expect.

Tags: demo-days, R, Python, unit-testing

Author: Jason DeBacker
Published: August 9, 2021
Unit testing is the testing of individual units or functions of a software application. This differs from regression testing, which focuses on verifying final outputs; unit testing instead exercises each smallest testable component of your code, making it easier to identify and trace errors.
+
Writing unit tests is good practice, though not one that’s always followed. The biggest barrier to writing unit tests is that doing so takes time. You might wonder “why am I testing code that runs?” But there are a number of benefits to writing unit tests:
+
+
It ensures that the code does what you expect it to do
+
You’ll better understand what your code is doing
+
You will reduce time tracking down bugs in your code
+
+
Often, writing unit tests will save you time in the longer run because it reduces debugging time and because it forces you to think more about what your code does, which often leads to the development of more efficient code. And for open source projects, or projects with many contributors, writing unit tests is important in reducing the likelihood that errors are introduced into your code. This is why the PSL catalog criteria require projects to provide at least some level of unit testing.
+
In the PSL Demo Day video linked above, I illustrate how to implement unit tests in R using the testthat package. There are essentially three steps to this process:
+
+
Create a directory to put your testing script in, e.g., a folder called tests
+
Create one or more scripts that define your tests.
+
+
Each test is represented as a call to the test_that function and contains a statement that will evaluate as true or false (e.g., you may use the expect_equal function to verify that a function returns expected values given certain inputs).
+
You will want to use test in the name of these test scripts, as well as something descriptive of what is tested.
+
+
Create a script that will run your tests.
+
+
Here you’ll need to import the testthat package and source the script(s) you are testing to load their functions.
+
Then you’ll use the test_dir function, passing it the directory in which the script(s) you created in Step 2 reside.
+
+
+
Check out the video to see examples of how each of these steps is executed. I’ve also found this blog post on unit tests with testthat to be helpful.
+
Unit testing in Python is more developed and straightforward thanks to the excellent pytest package. While pytest offers many options for parameterizing tests, running tests in parallel, and more, the basic steps remain the same as those outlined above:
+
+
Create a directory for your test modules (call this folder tests as pytest will look for that name).
+
Create test modules that define each test
+
+
Tests are defined much like any other function in Python, but will include an assertion statement that is triggered upon test failure.
+
You will want to use test in the name of these test modules, as well as something descriptive of what is tested.
+
+
You won’t need to create a script to run your tests as with testthat, but you may create a pytest.ini file to customize your test options.
+
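The pytest steps above can be sketched as a minimal test module (the file, function, and example names here are hypothetical, not from a PSL project):

```python
# test_rates.py -- pytest collects files named test_*.py and functions
# named test_* automatically; no runner script is needed.

def marginal_rate(income, bracket=50_000, low=0.10, high=0.20):
    """Toy two-bracket marginal tax rate, standing in for real model code."""
    return low if income <= bracket else high

def test_low_bracket():
    # expect_equal-style check: the function returns the expected value
    assert marginal_rate(30_000) == 0.10

def test_high_bracket():
    assert marginal_rate(80_000) == 0.20
```

Running `pytest` from the project root discovers and executes both tests, reporting any failed assertions.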
+
That’s about all it takes to get started writing unit tests for your code. PSL cataloged projects provide many excellent examples of a variety of unit tests, so search them for examples to build from. In a future Demo Day and blog post, we’ll talk about continuous integration testing to help get even more out of your unit tests.
+
+
+
+
+
\ No newline at end of file
diff --git a/posts/2021-09-20-demo-day-cs-auto-deploy.html b/posts/2021-09-20-demo-day-cs-auto-deploy.html
new file mode 100644
index 0000000..87dd77e
--- /dev/null
+++ b/posts/2021-09-20-demo-day-cs-auto-deploy.html
@@ -0,0 +1,687 @@
+
+
+
+
+
+
+
+
+
+
+
+
+Policy Simulation Library Blog - Demo Day: Deploying apps on Compute Studio
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Demo Day: Deploying apps on Compute Studio
+
+
+ How to deploy apps on Compute Studio using the new automated deployments feature.
+
+
+
+
demo-days
+
policy-simulation-library
+
compute-studio
+
+
+
+
+
+
+
+
+
Author
+
+
Hank Doupe
+
+
+
+
+
Published
+
+
September 20, 2021
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Compute Studio (C/S) is a platform for publishing and sharing computational models and data visualizations. In this demo day, I show how to publish your own project on C/S using the new automated deployments feature. You can find an in depth guide to publishing on C/S in the developer docs.
+
C/S supports two types of projects: models and data visualizations. Models are fed some inputs and return a result. Data visualizations are web servers backed by popular open-source libraries like Bokeh, Dash, or Streamlit. Models are good for long-running processes and producing archivable results that can be shared and returned to easily. Data visualizations are good for highly interactive and custom user experiences.
+
Now that you’ve checked out the developer docs and set up your model or data-viz, you can head over to the C/S publishing page https://compute.studio/new/ to publish your project. Note that this page is still very much under construction and may look different in a few weeks.
+
+
+
+
Next, you will be sent to the second stage in the publish flow where you will provide more details on how to connect your project on C/S:
+
+
+
+
Clicking “Connect App” will take you to the project home page:
+
+
+
+
Go to the “Settings” button in the top-right corner and this will take you to the project dashboard where you can modify everything from the social preview of your project to the amount of compute resources it needs:
+
+
+
+
The “Builds” link in the sidebar will take you to the builds dashboard where you can create your first build:
+
+
+
+
It’s time to create the first build. You can do so by clicking “New Build”. This will take you to the build status page. While the build is being scheduled, the page will look like this:
+
+
+
+
You can click the “Build History” link and it will show that the build has been started:
+
+
+
+
The build status page should be updated at this point and will look something like this:
+
+
+
+
C/S automated deployments are built on top of GitHub Actions. Unfortunately, the logs in GitHub Actions are not available through the GitHub API until after the workflow has completely finished. The build status dashboard will update as the build progresses, and once it’s done, you will have full access to the logs from the build. These will contain the output from installing your project and from running your project’s tests.
+
In this case, the build failed. We can inspect the logs to see that an import error caused the failure:
+
+
+
+
+
+
+
I pushed an update to my fork of Tax-Cruncher on GitHub and restarted the build by clicking “Failure. Start new Build”. The next build succeeded, and we can click “Release” to publish the project:
+
+
+
+
The builds dashboard now shows the two builds:
+
+
+
+
Finally, let’s go run our new model:
+
+
+
+
It may take a few seconds for the page to load. This is because the model code and all of its dependencies are being loaded onto the C/S servers for the first time:
+
+
+
+
The steps for publishing a data visualization are very similar. The main idea is that you tell C/S what Python file your app lives in and C/S will know how to run it given your data visualization technology choice.
+ A Python platform for building country-specific overlapping generations general equilibrium models.
+
+
+
+
demo-days
+
python
+
macroeconomics
+
overlapping-generations
+
+
+
+
+
+
+
+
+
Author
+
+
Jason DeBacker
+
+
+
+
+
Published
+
+
November 1, 2021
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
The OG-Core model is a general equilibrium, overlapping generations (OG) model suitable for evaluating fiscal policy. Since the work of Alan Auerbach and Laurence Kotlikoff in the 1980s, this class of model has become a standard in the macroeconomic analysis of tax and spending policy. This is for good reason. OG models are able to capture the impacts of taxes and spending in the short and long run, examine incidence of policy across generations of people (not just short run or steady state analysis of a cross-section of the economy), and capture important economic dynamics (e.g., crowding out effects of deficit-financed policy).
+
In the PSL Demo Day presentation linked above, I cover the basics of OG-Core: its history, its API, and how country-specific models can use OG-Core as a dependency. In brief, OG-Core provides a general overlapping generations framework, from which parameters can be calibrated to represent particular economies. Think of it this way: an economic model is just a set of parameters plus a system of equations. OG-Core spells out all of the equations to represent an economy with heterogeneous agents, production and government sectors, open economy options, and detailed policy rules. OG-Core also includes default values for all parameters, along with parameter metadata and parameter validation rules. A country specific application is then just a particular parameterization of the general OG-Core model.
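The “parameters plus equations” idea can be sketched abstractly; the names, values, and validation rule below are illustrative of the pattern, not OG-Core’s actual API:

```python
# A general framework ships defaults, metadata, and validation rules;
# a country-specific model is just a set of overrides.
DEFAULTS = {
    "beta": {"value": 0.96, "desc": "annual discount factor", "range": (0.0, 1.0)},
    "alpha": {"value": 0.35, "desc": "capital share of output", "range": (0.0, 1.0)},
}

def calibrate(overrides):
    """Return a parameter set: framework defaults plus country overrides."""
    params = {k: v["value"] for k, v in DEFAULTS.items()}
    for name, value in overrides.items():
        lo, hi = DEFAULTS[name]["range"]
        if not lo < value < hi:  # parameter validation rule
            raise ValueError(f"{name}={value} outside ({lo}, {hi})")
        params[name] = value
    return params

us_params = calibrate({"alpha": 0.33})  # a country-specific parameterization
```

In this sketch, the same defaults serve every country, and calibration only supplies the parameters that differ.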
+
As an example of a country-specific application, one can look at the OG-USA model. This model provides a calibration of OG-Core to the United States. The source code in that project allows one to go from raw data sources to the estimation and calibration procedures used to determine parameter values representing the United States, to parameter values in formats suitable for use in OG-Core. Country-specific models like OG-USA include (where available) links to microsimulation models of tax and spending programs to allow detailed microdata of actual and counterfactual policies to inform the net tax-transfer functions used in the OG-Core model. For those interested in building their own country-specific model, the OG-USA project provides a good example to work from.
+
We encourage you to take a look at OG-Core and related projects. New contributions and applications are always welcome. If you have questions or comments, reach out through the relevant repositories on GitHub to me, @jdebacker, or Rick Evans, @rickecon.
+
+
+
+
+
\ No newline at end of file
diff --git a/posts/2021-12-08-demo-day-synthimpute.html b/posts/2021-12-08-demo-day-synthimpute.html
new file mode 100644
index 0000000..474073a
--- /dev/null
+++ b/posts/2021-12-08-demo-day-synthimpute.html
@@ -0,0 +1,607 @@
+
+
+
+
+
+
+
+
+
+
+
+
+Policy Simulation Library Blog - Demo Day: Using synthimpute for data fusion
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Demo Day: Using synthimpute for data fusion
+
+
+ The synthimpute Python package fuses and synthesizes economic datasets with machine learning.
+
+
+
+
demo-days
+
python
+
data-fusion
+
synthimpute
+
+
+
+
+
+
+
+
+
Author
+
+
Max Ghenis
+
+
+
+
+
Published
+
+
December 8, 2021
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Suppose a policy analyst sought to estimate the impact of a policy that changed income tax rates and benefit rules while also adding a progressive wealth tax. The standard approach is to use a microsimulation model, where the rules are programmed as code, and then to run that program over a representative sample of households. Unfortunately, no single US government survey captures all the household characteristics needed to analyze this policy; in particular, reliable tax and benefit information lies in surveys like the Current Population Survey (CPS), while wealth lies in the Survey of Consumer Finances (SCF).
+
Assuming the analyst wanted to start with the CPS, they have several options to estimate wealth for households to levy the progressive wealth tax. Two typical approaches include:
+
+
Linear regression, predicting wealth from other household characteristics common to the CPS and SCF.
+
Matching, in which each CPS household is matched with the most similar household in the SCF.
+
+
Neither of these approaches, however, aims to estimate the distribution of wealth conditional on other characteristics. Linear regression explicitly estimates the mean prediction, but that could miss the tails of the wealth distribution, from which most of the wealth tax revenue would be collected.
+
Instead, the analyst could apply quantile regression to estimate the distribution of wealth conditional on other characteristics, and then measure the effectiveness of the estimation using quantile loss.
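Quantile (pinball) loss can be written in a few lines; this is a generic sketch of the metric, not synthimpute’s internal code:

```python
import numpy as np

def quantile_loss(y_true, y_pred, q):
    """Pinball loss at quantile q: under-predictions are penalized with
    weight q, over-predictions with weight (1 - q)."""
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return np.mean(np.maximum(q * diff, (q - 1) * diff))
```

For example, at q = 0.9 an under-prediction of 2 costs 1.8 while an over-prediction of 2 costs only 0.2, so a model minimizing this loss is pushed toward the 90th percentile.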
+
In this Demo Day, I present the concepts of microsimulation, imputation, and quantile loss to motivate the synthimpute Python package I’ve developed with my PolicyEngine colleague Nikhil Woodruff. In an experiment predicting wealth on a holdout set from the SCF, my former colleague Deepak Singh and I found that random forests significantly outperform OLS and matching for quantile regression, and this is the approach applied in synthimpute for both data fusion and data synthesis. The synthimpute API will be familiar to users of scikit-learn and statsmodels, with the difference being that synthimpute’s rf_impute function returns a random value from the predicted distribution; it can also skew the predictions to meet a target total.
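The idea of drawing a random value from a forest’s predicted distribution can be approximated in scikit-learn by sampling across a forest’s individual trees; this is a sketch of the concept on toy data, not synthimpute’s implementation of rf_impute:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Toy donor data: impute "wealth" from two shared characteristics.
X = rng.normal(size=(500, 2))
y = 3 * X[:, 0] + rng.normal(scale=0.5, size=500)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

X_recip = rng.normal(size=(10, 2))  # recipient records needing imputation
# Each column holds one record's predictions across all 100 trees,
# approximating its conditional distribution.
per_tree = np.stack([t.predict(X_recip) for t in forest.estimators_])
# Instead of the forest's mean, draw one tree's prediction per record --
# a crude random draw from the predicted distribution.
draws = per_tree[rng.integers(per_tree.shape[0], size=len(X_recip)),
                 np.arange(len(X_recip))]
```

Averaging `per_tree` down each column would recover the usual mean prediction; sampling preserves the spread that a wealth-tax simulation needs.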
+
We’ve used synthimpute to fuse data for research reports at the UBI Center and to enhance the PolicyEngine web app for UK tax and benefit simulation, and we welcome new users and contributors. Feel free to explore the repository or contact me with questions at max@policyengine.org.
+
+
+
+
+
\ No newline at end of file
diff --git a/posts/2021-12-28-2021-year-in-review.html b/posts/2021-12-28-2021-year-in-review.html
new file mode 100644
index 0000000..60b9b98
--- /dev/null
+++ b/posts/2021-12-28-2021-year-in-review.html
@@ -0,0 +1,602 @@
+
+
+
+
+
+
+
+
+
+
+
+
+Policy Simulation Library Blog - 2021: A year in review
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
2021: A year in review
+
+
+ Highlights from the Policy Simulation Library in 2021.
+
+
+
+
psl
+
psl-foundation
+
+
+
+
+
+
+
+
+
Author
+
+
Jason DeBacker
+
+
+
+
+
Published
+
+
December 28, 2021
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
As 2021 winds down, I wanted to take a few minutes to reflect on the Policy Simulation Library’s efforts over the past year. With an amazing community of contributors, supporters, and users, PSL has been able to make a real impact in 2021.
+
The library saw two new projects achieve “cataloged” status: Tax Foundation’s Capital Cost Recovery model and the Federal Reserve Bank of New York’s DSGE.jl model. Both models satisfy all the PSL criteria for transparency and reproducibility. Both are also written entirely in open source software: the Capital Cost Recovery model is in R and the DSGE model in Julia.
+
An exciting new project to join the Library this year is PolicyEngine. PolicyEngine is building open source tax and benefit microsimulation models and very user-friendly interfaces to these models. The goal of this project is to take policy analysis to the masses through intuitive web and mobile interfaces for policy models. The UK version of the PolicyEngine app has already seen use from politicians interested in reforming the tax and benefit system in the UK.
+
Another excellent new addition to the library is the Federal-State Tax Project. This project provides data imputation tools to create state tax data that are representative of each state while remaining consistent with federal totals. These datasets can then be used in microsimulation models, such as Tax-Calculator, to study the impact of federal tax laws across the states. Matt Jensen and Don Boyd have published several pieces with these tools, including in State Tax Notes.
+
PSL Foundation became an official business entity in 2021. While still awaiting a letter of determination for 501(c)(3) status from the IRS, PSL Foundation was able to raise more than $25,000 in the last few months of 2021 to support open source policy analysis!
+
PSL community members continued to interact several times each week in our public calls. The PSL Shop was launched in 2021 so that anyone can get themselves some PSL swag (with some of each purchase going back to the PSL Foundation to support the Library). In addition, PSL hosted 20 Demo Day presentations from 11 different presenters! These short talks covered everything from new projects to interesting applications of some of the first projects to join the Library, as well as general open source tools.
+
As in past years, PSL cataloged and incubating models were found to be of great use in current policy debates. Whether it was the ARPA, Biden administration proposals to expand the CTC, or California’s Basic Income Bill, the accessibility of these open source projects and the ability to reproduce their results have made them a boon to policy analysts.
+
We are looking forward to a great 2022! We expect the Library to continue to grow, foresee many interesting and helpful Demo Days, and are planning a DC PSL Workshop for March 2022. We hope to see you around these or other events!
+
Best wishes from PSL for a happy and healthy New Year!
+
+
+
+
+
\ No newline at end of file
diff --git a/posts/2022-03-03-DC-workshop.html b/posts/2022-03-03-DC-workshop.html
new file mode 100644
index 0000000..02abb52
--- /dev/null
+++ b/posts/2022-03-03-DC-workshop.html
@@ -0,0 +1,601 @@
+
+
+
+
+
+
+
+
+
+
+
+
+Policy Simulation Library Blog - Policy Simulation Library DC Workshop: Open source tools for analyzing tax policy
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Policy Simulation Library DC Workshop: Open source tools for analyzing tax policy
+
+
+ Washington, DC open-source modeling workshop, March 25, 2022, 8:30am-1:00pm, Martin Luther King, Jr. Memorial Library.
+
+
+
+
PSL
+
PSL-Foundation
+
Workshop
+
+
+
+
+
+
+
+
+
Author
+
+
Jason DeBacker
+
+
+
+
+
Published
+
+
March 3, 2022
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
The Policy Simulation Library is hosting a workshop in Washington, DC on March 25 on open source tools for the analysis of tax policy. Participants will learn how to use open source models from the Library for revenue estimation, distributional analysis, and simulating the economic impacts of tax policy. The workshop is intended to be a hands-on experience, and participants can expect to leave with the required software, documentation, and knowledge to continue using these tools. All models in the workshop are written in the Python programming language; familiarity with the language is helpful, but not required.
The workshop will be held at the Martin Luther King Jr. Memorial Library in Washington, DC. Participants are expected to arrive by 8:30am and the program will conclude at 1:00pm. Breakfast and lunch will be provided. PSL Foundation is sponsoring the event and there is no cost to attend. Attendance is limited to 30 in order to make this a dynamic and interactive workshop.
+
To register, please use this Google Form. Registration will close March 11. Participants will be expected to bring a laptop to the workshop where they can interact with the software in real time with the instructors. Registered participants will receive an email before the event with a list of software to install before the workshop.
+
Please feel free to share this invitation with your colleagues.
+
+
+
+
+
\ No newline at end of file
diff --git a/posts/2022-04-12-demo-day-policyengine-us.html b/posts/2022-04-12-demo-day-policyengine-us.html
new file mode 100644
index 0000000..d2bb705
--- /dev/null
+++ b/posts/2022-04-12-demo-day-policyengine-us.html
@@ -0,0 +1,603 @@
+
+
+
+
+
+
+
+
+
+
+
+
+Policy Simulation Library Blog - Demo Day: Modeling taxes and benefits with the PolicyEngine US web app
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Demo Day: Modeling taxes and benefits with the PolicyEngine US web app
+
+
+ PolicyEngine US is a new web app for computing the impact of US tax and benefit policy.
+
+
+
+
demo-days
+
apps
+
taxes
+
benefits
+
us
+
+
+
+
+
+
+
+
+
Author
+
+
Max Ghenis
+
+
+
+
+
Published
+
+
April 12, 2022
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
PolicyEngine is a nonprofit that builds free, open-source software to compute the impact of public policy. After launching our UK app in October 2021, we’ve just launched our US app, which calculates households’ federal taxes and several benefit programs, both under current law and under customizable policy reforms.
+
In this Demo Day, I provide background on PolicyEngine and demonstrate how to use PolicyEngine US (a Policy Simulation Library cataloged model) to answer a novel policy question:
+
+
How would doubling both (a) the Child Tax Credit and (b) the Supplemental Nutrition Assistance Program (SNAP) net income limit affect a single parent in California with $1,000 monthly rent and $50 monthly broadband costs?
+
+
By bringing together tax and benefit models into a web interface, we can answer this question quickly without programming experience, as well as an unlimited array of questions like it. The result is a table breaking down the household’s net income by program, as well as graphs of net income and marginal tax rates as the household’s earnings vary.
+
I close with a quick demo of PolicyEngine UK, which adds society-wide results like the impact of reforms on the budget, poverty, and inequality, as well as contributed policy parameters. We’re planning to bring those features to PolicyEngine US, along with state tax and benefit programs in all 50 states, over the next two years (if not sooner).
+
Feel free to explore the app and reach out with any questions at max@policyengine.org.
Demo Day: Analyzing tax competitiveness with Cost-of-Capital-Calculator
+
+
+ Using Cost-of-Capital-Calculator with data on international business tax policies.
+
+
+
+
demo-days
+
cost-of-capital-calculator
+
business-taxation
+
corporate-income-tax
+
taxes
+
+
+
+
+
+
+
+
+
Author
+
+
Jason DeBacker
+
+
+
+
+
Published
+
+
April 18, 2022
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
In the Demo Day video shared here, I show how to use open source tools to analyze international corporate tax competitiveness. The two main tools illustrated are the Cost-of-Capital-Calculator (CCC), a model to compute measures of the tax burden on new investments, and Tax Foundation’s International Tax Competitiveness Index (ITCI).
+
Tax Foundation has made many helpful resources available online. Their measures of international business tax policy are a great example of this. The ITCI outputs and inputs are all well documented, with source code to reproduce results available on GitHub.
+
I plug Tax Foundation’s country-by-country data into CCC functions using its Python API. Because CCC is designed to flexibly take array or scalar data, operating on rows of tabular data, such as that in the ITCI, is relatively straightforward. The Google Colab notebook I walk through in this Demo Day can be a helpful example to follow if you’d like to do something similar with the Tax Foundation data, or with your own data source. From the basic building blocks there (reading in data, calling CCC functions), you can extend the analysis in a number of ways: for example, adding additional years of data (Tax Foundation posts their data back to 2014), modifying economic assumptions, or creating counterfactual policy experiments across sets of countries.
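As an illustration of that array/scalar flexibility, here is a simplified Hall-Jorgenson-style user cost function applied to rows of a small table; the formula, countries, and numbers are toy examples, not CCC’s actual equations or API:

```python
import numpy as np
import pandas as pd

def user_cost(r, delta, tau, z):
    """Simplified user cost of capital. Works on scalars or whole columns
    because it only uses arithmetic that NumPy/pandas broadcast."""
    return (r + delta) * (1 - tau * z) / (1 - tau)

df = pd.DataFrame({
    "country": ["A", "B", "C"],   # hypothetical country rows
    "tau": [0.21, 0.30, 0.19],    # statutory corporate rate
    "z": [0.85, 0.90, 0.80],      # PV of depreciation deductions
})
# One call computes the measure for every row at once.
df["user_cost"] = user_cost(r=0.05, delta=0.10, tau=df["tau"], z=df["z"])
```

Calling the same function with scalar arguments returns a single country’s value, which is the property that makes feeding in a full cross-country table so convenient.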
+
If you find this example useful, or have questions or suggestions about this type of analysis, please feel free to reach out to me.
+
+
+
+
+
\ No newline at end of file
diff --git a/posts/2022-06-28-demo-day-github.html b/posts/2022-06-28-demo-day-github.html
new file mode 100644
index 0000000..a226a54
--- /dev/null
+++ b/posts/2022-06-28-demo-day-github.html
@@ -0,0 +1,609 @@
+
+
+
+
+
+
+
+
+
+
+
+
+Policy Simulation Library Blog - Demo Day: Getting Started with GitHub
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Demo Day: Getting Started with GitHub
+
+
+ The basics of forking and cloning repositories and working on branches.
+
+
+
+
demo-days
+
github
+
git
+
workflow
+
getting-started
+
+
+
+
+
+
+
+
+
Author
+
+
Jason DeBacker
+
+
+
+
+
Published
+
+
June 28, 2022
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Git and GitHub often present themselves as barriers to entry to would-be contributors to PSL projects, even for those who are otherwise experienced with policy modeling. But these tools are critical to collaboration on open source projects. In the Demo Day video linked above, I cover some of the basics to get set up and begin contributing to an open source project.
+
There are four steps I outline:
+
+
Create a “fork” of the repository you are interested in. A fork is a copy of the source code that resides on GitHub (i.e., in the cloud). A fork gives you control over a copy of the source code. You will be able to merge in changes to the code on this fork, even if you don’t have permissions to do so with the original repository.
+
“Clone” the fork. Cloning will download a copy of the source code from your fork onto your local machine. But cloning is more than just downloading the source code. It will include the version history of the code and automatically create a link between the local files and the remote files on your fork.
+
Configure your local files to talk to both your fork (which has a default name of origin) and the original repository you forked from (which typically has the default name of upstream). Do this by using your command prompt or terminal to navigate to the directory you just cloned. From there, run:
+
+
git remote add upstream URL_to_original_repo.git
+
And check that this worked by giving the command:
+
git remote -v
+
If things worked, you should see URLs to your fork and the upstream repository with “(fetch)” and “(push)” by them. More info on this is in the Git docs.
+
+
Now that you have copies of the source code on your fork and on your local machine, you are ready to begin contributing. As you make changes to the source code, you’ll want to work on development branches, which are copies of the code. Ideally, you keep your “main” (or “master”) branch clean (i.e., your best version of the code) and develop on branches. When you’ve completed the development work (e.g., adding a new feature), you will then merge it into the “main” branch.
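Putting the steps above together, the workflow looks like this at the command line (a sketch with placeholder user and repository names, to be replaced with your own):

```shell
# 1. Fork the repository in the GitHub web interface, then:
git clone https://github.com/YOUR_USERNAME/PROJECT.git   # 2. clone your fork
cd PROJECT
# 3. point "upstream" at the original repository
git remote add upstream https://github.com/ORIGINAL_OWNER/PROJECT.git
git remote -v          # verify origin (your fork) and upstream (original)
# 4. develop on a branch, keeping main clean
git checkout -b my-new-feature
```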
+
+
I hope this helps you get started contributing to open source projects. Git and GitHub are valuable tools and there is lots more to learn, but these basics will get you going. For more information, see the links below. If you want to get started working with a project in the Library, feel free to reach out to me through the relevant repo (@jdebacker on GitHub) or drop into a PSL Community Call (dates on the PSL Calendar).
Targeted programs like these are common in guaranteed income pilots and in some enacted policies, and I find that this one would cost-effectively reduce poverty: if expanded to Massachusetts, it would cost $1.2 billion per year and cut child poverty 42%.
+
However, that targeting comes at a cost. Using the OpenFisca US microsimulation model (supported by the Center for Growth and Opportunity and cataloged by the Policy Simulation Library), I find that the program would deepen an existing welfare cliff at 200% of the poverty line. For example, a family of four would lose over $19,000 total—$9,000 from the cash assistance and $10,000 from other benefits—once they earn a dollar above 200% of the poverty line (about $55,000). To recover those lost benefits, they would have to earn an additional $26,000, a range I call the “earnings dead zone”.
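The cliff arithmetic can be sketched with the rounded numbers above; this toy version ignores taxes and the phase-out of other benefits, which is why the real earnings dead zone ($26,000) is wider than the sketch implies:

```python
CLIFF = 55_000   # roughly 200% of the poverty line for a family of four
LOST = 19_000    # benefits lost upon earning one dollar more

def net_income(earnings):
    """Toy net income: earnings plus benefits, with a hard cutoff."""
    benefits = LOST if earnings <= CLIFF else 0
    return earnings + benefits

at_cliff = net_income(CLIFF)        # peak net income just before the cliff
dead_zone_end = CLIFF + LOST        # earnings must re-cover the lost benefits
# Any earnings between CLIFF and dead_zone_end leave the family with less
# net income than they had at the cliff -- an infinite marginal tax rate.
```

In this simplified version the family must earn an extra $19,000 just to break even; layering in taxes and other phase-outs stretches that to the $26,000 figure from the presentation.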
+
My presentation reviews these trends in both slides and the PolicyEngine US app for computing the impacts of tax and benefit policy. For example, I show how repealing the SNAP emergency allotment would smooth out welfare cliffs, while reducing resources available to low-income families, and how a universal child allowance avoids work disincentives while less cost-effectively reducing poverty.
+
Policymakers face trade-offs between equity and efficiency, and analyses of labor supply responses typically focus on marginal tax rates. With their infinite marginal tax rates, welfare cliffs are a less explored area, even though they surface in several parts of the tax and benefit system. This paper makes a start, but more research remains to be done.
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/posts/2022-12-31-2022-year-in-review.html b/posts/2022-12-31-2022-year-in-review.html
new file mode 100644
index 0000000..75d7e14
--- /dev/null
+++ b/posts/2022-12-31-2022-year-in-review.html
@@ -0,0 +1,601 @@
+
+
+
+
+
+
+
+
+
+
+
+
+Policy Simulation Library Blog - 2022: A year in review
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
2022: A year in review
+
+
+ Highlights from the Policy Simulation Library in 2022.
+
+
+
+
psl
+
psl-foundation
+
+
+
+
+
+
+
+
+
Author
+
+
Jason DeBacker
+
+
+
+
+
Published
+
+
December 31, 2022
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
This has been another successful year for the Policy Simulation Library, whose great community of contributors continues to make innovative advances in open source policy analysis, and for the PSL Foundation, which supports the Library and its community. We are so thankful for all those who have made financial or technical contributions to the PSL this year! In this blog post, I take a few minutes at the end of the year to reflect on a few of the highlights from 2022.
+
PolicyEngine, a PSL Foundation fiscally-sponsored project, launched PolicyEngine US in April and has since seen many use cases of the model (check out the PolicyEngine year-in-review here). PolicyEngine had begun by leveraging the OpenFisca platform, but has since transitioned to its own PolicyEngine Core. PolicyEngine Core and related projects (such as PolicyEngine US and PolicyEngine UK) already meet all the criteria set forth by the Policy Simulation Library. Keep an eye out for lots more excellent tax and benefit policy analysis tools from PolicyEngine in 2023 and beyond!
+
PSL Foundation has partnered with QuantEcon, acting as a fiscal sponsor for their projects that provide training and training materials for economic modeling and econometrics using open source tools. QuantEcon ran a massive open online course in India with more than 1,000 registrants in the summer of 2022. They also ran an online course for over 100 students from universities in Africa in 2022. Further, with the funding received through their partnership with PSL Foundation, QuantEcon will continue these efforts in 2023 with a planned in-person course in India.
+
PSL hosted its first in-person workshop in March. The workshop focused on open source tools for tax policy analysis including Tax-Calculator, Cost-of-Capital-Calculator, OG-USA, and PolicyEngine US. The PSL event was, appropriately enough, hosted at the MLK Memorial Library in DC. We filled the space with 30 attendees from think tanks, consultancies, and government agencies. The workshop was a great success and we look forward to hosting more in-person workshops in the future.
+
PSL’s bi-weekly Demo Day series continued throughout 2022, with 13 Demo Days this year. In these, we saw a wide array of presenters from institutions such as the Federal Reserve Bank of Atlanta, PolicyEngine, Tax Foundation, National Center for Children in Poverty, IZA Institute of Labor Economics, Channels, the University of South Carolina, the Center for Growth and Opportunity, and the American Enterprise Institute. You can go back and rewatch any of these presentations on YouTube.
+
It’s been a fantastic year and we expect even more from the community and PSL Foundation in 2023. PSL community members continue to interact several times each week on our public calls. Check out the events page and join us in the New Year!
+
From all of us at the PSL, best wishes for a happy and healthy New Year!
+
+
+
+
+
\ No newline at end of file
diff --git a/posts/2023-12-28-2023-year-in-review.html b/posts/2023-12-28-2023-year-in-review.html
new file mode 100644
index 0000000..5a6a01c
--- /dev/null
+++ b/posts/2023-12-28-2023-year-in-review.html
@@ -0,0 +1,604 @@
+
+
+
+
+
+
+
+
+
+
+
+
+Policy Simulation Library Blog - 2023: A year in review
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
2023: A year in review
+
+
+ Highlights from the Policy Simulation Library in 2023.
+
+
+
+
psl
+
psl-foundation
+
+
+
+
+
+
+
+
+
Author
+
+
Jason DeBacker
+
+
+
+
+
Published
+
+
December 28, 2023
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
While there haven’t been any blog posts in 2023 :wink:, it has been a productive year for the Policy Simulation Library (PSL) community and PSL Foundation!
+
We’ve continued to serve our mission through education and outreach efforts. We hosted 13 Demo Days in 2023, including presentations from individuals at the Congressional Budget Office, Allegheny County, NOAA, Johns Hopkins, QuantEcon, the City of New York, and other institutions. Archived videos of the Demo Days are available on our YouTube Channel.
+
In addition, we hosted an in-person workshop at the National Tax Association’s annual conference in November. This event featured the PolicyEngine-US project and was led by Max Ghenis and Nikhil Woodruff, co-founders of PolicyEngine. Attendees included individuals from the local area (Denver) as well as conference-goers, representing academia, government, and think tanks. Max and Nikhil provided an overview of PolicyEngine and then walked attendees through a hands-on exercise using the PolicyEngine US tool, having them write code to generate custom plots in a Google Colab notebook. It was a lot of fun – and the pizza was decent too!
+
Speaking of PolicyEngine, this fiscally sponsored project of PSL Foundation had a banner year in terms of fundraising and development. The group received several grants in 2023 and closed out the year with a large grant from Arnold Ventures. They also submitted an NSF grant proposal and are awaiting a decision. The group added an experienced nonprofit executive, Leigh Gibson, to the team. Leigh provides support with fundraising and operations, and she’s been instrumental in these efforts. In terms of software development, the PolicyEngine team has drawn on more than 60 volunteers, with Pavel Makarchuk coming on as Policy Modeling Manager to help coordinate these efforts. With its community, PolicyEngine has codified numerous US state tax and benefit policies and has developed a robust method to create synthetic data for use in policy analysis. Be on the lookout for a lot more from them in 2024.
+
QuantEcon, another fiscally sponsored project, has also made tremendous contributions to open source economics in 2023. Most importantly, they ran a very successful summer school in West Africa. In addition, they have continued to make key contributions to software tools for teaching and training in economics. These include Jupyteach, which Spencer Lyon shared in our Demo Day series. With their online materials, textbooks, and workshops around the world, QuantEcon is shaping how researchers and policy analysts employ economic tools to solve real-world problems.
+
PSL Foundation added a third fiscally sponsored project, the Policy Change Index (PCI), in 2023. PCI was founded by Weifeng Zhong, a Senior Research Fellow at the Mercatus Center at George Mason University, and uses natural language processing and machine learning to predict changes in policy among autocratic regimes. PCI has had a very successful start with PCI-China, predicting policy changes in China, and PCI-Outbreak, estimating the true extent of COVID-19 cases in China during the pandemic. Currently, they are extending their work to include predictive indices for Russia, North Korea, and Iran. PSL-F is excited for the opportunity to help support this important work.
+
Other cataloged projects have continued to be widely used in 2023. To note a few of these use cases, the United Nations has partnered with Richard Evans and Jason DeBacker, maintainers of OG-Core, to help bring the modeling platform to developing countries they are assisting. Tax Foundation’s Capital Cost Recovery model has been updated to 2023 and used in their widely cited 2023 Tax Competitiveness Index. And the Tax-Calculator and TaxData projects both continue to be used by think tanks and researchers.
+
As 2023 comes to a close, we look forward to 2024. We’ll be launching a new PSLmodels.org website soon. And there’ll be many more events – we hope you join in.
+
From all of us at the PSL, best wishes for a happy and healthy New Year!
+
+
+
+
+
\ No newline at end of file
diff --git a/psl/psl-foundation/2020/12/23/2020-year-in-review/index.html b/psl/psl-foundation/2020/12/23/2020-year-in-review/index.html
new file mode 100644
index 0000000..5e2bb32
--- /dev/null
+++ b/psl/psl-foundation/2020/12/23/2020-year-in-review/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/psl/psl-foundation/2021/12/28/2021-year-in-review/index.html b/psl/psl-foundation/2021/12/28/2021-year-in-review/index.html
new file mode 100644
index 0000000..bf1693c
--- /dev/null
+++ b/psl/psl-foundation/2021/12/28/2021-year-in-review/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/psl/psl-foundation/2022/12/31/2022-year-in-review/index.html b/psl/psl-foundation/2022/12/31/2022-year-in-review/index.html
new file mode 100644
index 0000000..0fd14a3
--- /dev/null
+++ b/psl/psl-foundation/2022/12/31/2022-year-in-review/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/psl/psl-foundation/2023/12/28/2023-year-in-review/index.html b/psl/psl-foundation/2023/12/28/2023-year-in-review/index.html
new file mode 100644
index 0000000..7859fdc
--- /dev/null
+++ b/psl/psl-foundation/2023/12/28/2023-year-in-review/index.html
@@ -0,0 +1,14 @@
+
+
+ Redirect
+
+
+
+
+
diff --git a/robots.txt b/robots.txt
old mode 100755
new mode 100644
index 3d7106f..b2b4185
--- a/robots.txt
+++ b/robots.txt
@@ -1 +1 @@
-Sitemap: https://blog.pslmodels.org/sitemap.xml
+Sitemap: https://PSLmodels.github.io/blog/sitemap.xml
diff --git a/search.json b/search.json
new file mode 100644
index 0000000..4b2eeb3
--- /dev/null
+++ b/search.json
@@ -0,0 +1,233 @@
+[
+ {
+ "objectID": "posts/2021-03-02-demo-day-contributing-psl.html",
+ "href": "posts/2021-03-02-demo-day-contributing-psl.html",
+ "title": "Demo Day: Contributing to PSL projects",
+ "section": "",
+ "text": "In the most recent PSL Demo Day, I illustrate how to contribute to PSL projects. The open source nature of projects in the PSL catalog allows anyone to contribute. The modularity of the code, coupled with robust testing, means that one can bite off small pieces that help improve the models and remain confident those changes work as expected.\nTo begin the process of finding where to contribute to PSL projects, I advise looking through the PSL GitHub Organization to see what projects interest you. Once a project of interest is identified, looking over the open “Issues” can provide a sense of where model maintainers and users are looking for help (see especially the “Help Wanted” tags). It is also completely appropriate to create a new Issue to express interest in helping and ask for direction on where that might best be done given your experience and preferences.\nWhen you are ready to begin to contribute to a project, you’ll want to fork and clone the GitHub repository to help you get the files on your local machine and ready for you to work with. Many PSL projects outline the detailed steps to get you up and running. For example, see the Tax-Calculator Contributor Guide, which outlines the step-by-step process for doing this and confirming that everything works as expected on your computer.\nAfter you are set up and ready to begin modifying source code for the PSL project(s) you’re interested in contributing to, you can reference the PSL-incubating Git-Tutorial project that provides more details on the Git workflow followed by most PSL projects.\nAs you contribute, you may want to get more involved in the community. A couple ways to do this are to join any of the PSL community events, all of which are open to the public, and to post to the PSL Discourse Forums. 
These are great places to meet community members and ask questions about how and where to best contribute.\nI hope this helps you get started as a PSL contributor – we look forward to getting you involved in making policy analysis better and more transparent!\nResources:\n\nPSL Git-Tutorial\nPSL community events\nPSL Discourse Forums\nTax-Calculator Contributor Guide\nPSL GitHub Organization"
+ },
+ {
+ "objectID": "posts/2021-09-20-demo-day-cs-auto-deploy.html",
+ "href": "posts/2021-09-20-demo-day-cs-auto-deploy.html",
+ "title": "Demo Day: Deploying apps on Compute Studio",
+ "section": "",
+ "text": "Compute Studio (C/S) is a platform for publishing and sharing computational models and data visualizations. In this demo day, I show how to publish your own project on C/S using the new automated deployments feature. You can find an in depth guide to publishing on C/S in the developer docs.\nC/S supports two types of projects: models and data visualizations. Models are fed some inputs and return a result. Data visualizations are web servers backed by popular open-source libraries like Bokeh, Dash, or Streamlit. Models are good for long-running processes and producing archivable results that can be shared and returned to easily. Data visualizations are good for highly interactive and custom user experiences.\nNow that you’ve checked out the developer docs and set up your model or data-viz, you can head over to the C/S publishing page https://compute.studio/new/ to publish your project. Note that this page is still very much under construction and may look different in a few weeks.\n\n\n\nPublish page\n\n\nNext, you will be sent to the second stage in the publish flow where you will provide more details on how to connect your project on C/S:\n\n\n\nConnect Project page\n\n\nClicking “Connect App” will take you to the project home page:\n\n\n\nProject home page\n\n\nGo to the “Settings” button in the top-right corner and this will take you to the project dashboard where you can modify everything from the social preview of your project to the amount of compute resources it needs:\n\n\n\nProject dashboard\n\n\nThe “Builds” link in the sidebar will take you to the builds dashboard where you can create your first build:\n\n\n\nBuild history dashboard\n\n\nIt’s time to create the first build. You can do so by clicking “New Build”. This will take you to the build status page. 
While the build is being scheduled, the page will look like this:\n\n\n\nBuild scheduled page\n\n\nYou can click the “Build History” link and it will show that the build has been started:\n\n\n\nBuild history dashboard\n\n\nThe build status page should be updated at this point and will look something like this:\n\n\n\nBuild status page\n\n\nC/S automated deployments are built on top of Github Actions. Unfortunately, the logs in Github Actions are not available through the Github API until after the workflow is completely finished. The build status dashboard will update as the build progresses and once it’s done, you will have full access to the logs from the build. These will contain outputs from installing your project and the outputs from your project’s tests.\nIn this case, the build failed. We can inspect the logs to see that an import error caused the failure:\n\n\n\nBuild failed status page\n\n\n\n\n\nBuild failed status page with logs\n\n\nI pushed an update to my fork of Tax-Cruncher on Github and restarted the build by clicking “Failure. Start new Build”. The next build succeeded and we can click “Release” to publish the project:\n\n\n\nBuild status page success\n\n\nThe builds dashboard now shows the two builds:\n\n\n\nUpdated build history page\n\n\nFinally, let’s go run our new model:\n\n\n\nRun project page\n\n\nIt may take a few seconds for the page to load. This is because the model code and all of its dependencies are being loaded onto the C/S servers for the first time:\n\n\n\nRun project page with inputs form\n\n\nThe steps for publishing a data visualization are very similar. The main idea is that you tell C/S what Python file your app lives in and C/S will know how to run it given your data visualization technology choice."
+ },
+ {
+ "objectID": "posts/2020-11-06-introducing-psl-blog.html",
+ "href": "posts/2020-11-06-introducing-psl-blog.html",
+ "title": "Introducing the PSL Blog",
+ "section": "",
+ "text": "Our mission at the Policy Simulation Library is to improve public policy by opening up models and data preparation routines for policy analysis. To support and showcase our diverse community of users and developers, we engage across several mediums: a monthly newsletter, a Q&A forum, (now-virtual) meetups, our Twitter feed, our YouTube channel, documentation for models in our catalog, and of course, issues and pull requests on GitHub.\nToday, we’re adding a new medium: the PSL Blog. We’ll use this space to share major updates on our catalog, provide tutorials, and summarize events or papers that involve our models.\nIf you’d like to share your work on our blog, or to suggest content, drop me a line. To follow along, add the PSL blog’s RSS feed or subscribe to our newsletter.\nHappy reading,\nMax Ghenis\nEditor, PSL Blog"
+ },
+ {
+ "objectID": "posts/2020-11-18-demo-day-creating-reform-files.html",
+ "href": "posts/2020-11-18-demo-day-creating-reform-files.html",
+ "title": "Demo Day: Building policy reform files",
+ "section": "",
+ "text": "Check out the video:\n\nWe will host Demo Days every two weeks until the end of the year. You can see our schedule on our events page.\n\nShow notes:\nI demonstrate how to build policy reform files using the Tax-Brain webapp on Compute Studio. (Useful links below.) This is an introductory lesson that ends with a cliffhanger. We don’t run the model. But we do generate an individual income and payroll tax reform file that is compatible with a range of policy simulation models and analytic tools, some designed for policy decision makers, others for taxpayers and benefits recipients interested in assessing their own circumstances.\nBeyond individual and payroll tax analysis, the reform file can be used with models that assess pass-through and corporate taxation of businesses, as well as a variety of income benefit programs. A wide range of use cases will occupy future events.\nResources:\n\nDemo C/S simulation\nIRS Form 1040\nPSL Catalog\nPSL Events"
+ },
+ {
+ "objectID": "posts/2021-03-08-demo-day-cs-api-stitch.html",
+ "href": "posts/2021-03-08-demo-day-cs-api-stitch.html",
+ "title": "Demo Day: Stitching together apps on Compute Studio",
+ "section": "",
+ "text": "In Demo Day 8, I talked about connecting multiple apps on Compute Studio with PSL Stitch. The source code for PSL stitch can be found in this repository.\nStitch is composed of three components:\n\nA python package that can be run like a normal Python package.\nA RESTful API built with FastAPI that is called remotely to create simulations on Compute Studio.\nA GUI built with ReactJS that makes calls to the REST API to create and monitor simulations.\n\nOne of the cool things about this app is that it uses ParamTools to read the JSON files under the hood. This means that it can read links to data in other Compute Studio runs, files on GitHub, or just plain JSON. Here are some example parameters:\n\npolicy parameters: cs://PSLmodels:Tax-Brain@49779/inputs/adjustment/policy\ntax-cruncher parameters: {\"sage\": [{\"value\": 25}]}\nbusiness tax parameters: {\"CIT_rate\": [{\"value\": 0.25, \"year\": 2021}]}\n\nAfter clicking run, three simulations will be created on Compute Studio and the app will update as soon as the simulations have finished:\n\n\n\nGetting started\n\n\n\n\n\nCS Simulations\n\n\nOnce they are done, the simulations are best viewed and interacted with on Compute Studio, but you can still inspect the JSON response from the Compute Studio API:\n\n\n\nSimulation Complete\n\n\nI created this app to show that it’s possible to build apps on top of the Compute Studio API. I think PSL Stitch is a neat example of how to do this, but I am even more excited to see what others build next.\nAlso, this is an open source project and has lots of room for improvement. If you are interested in learning web technologies related to REST APIs and frontend development with JavaScript, then this project could be a good place to start!\nResources:\n\nPSL Stitch\nSource code\nCompute Studio API Docs"
+ },
+ {
+ "objectID": "posts/2023-12-28-2023-year-in-review.html",
+ "href": "posts/2023-12-28-2023-year-in-review.html",
+ "title": "2023: A year in review",
+ "section": "",
+    "text": "While there haven’t been any blog posts in 2023 :wink:, it has been a productive year for the Policy Simulation Library (PSL) community and PSL Foundation!\nWe’ve continued to serve our mission through education and outreach efforts. We hosted 13 Demo Days in 2023, including presentations from individuals at the Congressional Budget Office, Allegheny County, NOAA, Johns Hopkins, QuantEcon, the City of New York, and other institutions. Archived videos of the Demo Days are available on our YouTube Channel.\nIn addition, we hosted an in-person workshop at the National Tax Association’s annual conference in November. This event featured the PolicyEngine-US project and was led by Max Ghenis and Nikhil Woodruff, co-founders of PolicyEngine. Attendees included individuals from the local area (Denver) and conference attendees, who represented academia, government, and think tanks. Max and Nikhil provided an overview of PolicyEngine and then walked attendees through a hands-on exercise using the PolicyEngine US tool, having them write code to generate custom plots in a Google Colab notebook. It was a lot of fun – and the pizza was decent too!\nSpeaking of PolicyEngine, this fiscally-sponsored project of PSL Foundation had a banner year in terms of fundraising and development. The group received several grants in 2023 and closed out the year with a large grant from Arnold Ventures. They also wrote an NSF grant proposal which they are waiting to hear back from. The group added an experienced nonprofit executive, Leigh Gibson, to their team. Leigh provides support with fundraising and operations, and she’s been instrumental in these efforts. In terms of software development, the PolicyEngine team has been able to greatly leverage volunteers (more than 60!) with Pavel Makarchuk coming on as Policy Modeling Manager to help coordinate these efforts. 
With their community, PolicyEngine has codified numerous US state tax and benefit policies and has developed a robust method to create synthetic data for use in policy analysis. Be on the lookout for a lot more from them in 2024.\nQuantEcon, another fiscally sponsored project, has also made tremendous contributions to open source economics in 2023. Most importantly, they ran a very successful summer school in West Africa. In addition, they have continued to make key contributions to software tools for teaching and training in economics. These include Jupyteach, which Spencer Lyon shared in our Demo Day series. With their online materials, textbooks, and workshops around the world, QuantEcon is shaping how researchers and policy analysts employ economic tools to solve real-world problems.\nPSL Foundation added a third fiscally sponsored project, the Policy Change Index (PCI), in 2023. PCI was founded by Weifeng Zhong, a Senior Research Fellow at the Mercatus Center at George Mason University, and uses natural language processing and machine learning to predict changes in policy among autocratic regimes. PCI has had a very successful start with PCI-China, predicting policy changes in China, and PCI-Outbreak, predicting the extent of true COVID-19 case counts in China during the pandemic. Currently, they are extending their work to include predictive indices for Russia, North Korea, and Iran. PSL-F is excited for the opportunity to help support this important work.\nOther cataloged projects have continued to be widely used in 2023. To note a few of these use cases, the United Nations has partnered with Richard Evans and Jason DeBacker, maintainers of OG-Core, to help bring the modeling platform to developing countries they are assisting. Tax Foundation’s Capital Cost Recovery model has been updated to 2023 and used in their widely cited 2023 Tax Competitiveness Index. 
And the Tax-Calculator and TaxData projects both continue to be used by think tanks and researchers.\nAs 2023 comes to a close, we look forward to 2024. We’ll be launching a new PSLmodels.org website soon. And there’ll be many more events – we hope you join in.\nFrom all of us at the PSL, best wishes for a happy and healthy New Year!\nResources:\n\nPSL Models\nPSL Foundation\nPSL Twitter Feed\nPSL YouTube channel\nPSL on Open Collective\nPSL Shop for PSL branded merchandise"
+ },
+ {
+ "objectID": "posts/2022-04-12-demo-day-policyengine-us.html",
+ "href": "posts/2022-04-12-demo-day-policyengine-us.html",
+ "title": "Demo Day: Modeling taxes and benefits with the PolicyEngine US web app",
+ "section": "",
+ "text": "PolicyEngine is a nonprofit that builds free, open-source software to compute the impact of public policy. After launching our UK app in October 2021, we’ve just launched our US app, which calculates households’ federal taxes and several benefit programs, both under current law and under customizable policy reforms.\nIn this Demo Day, I provide background on PolicyEngine and demonstrate how to use PolicyEngine US (a Policy Simulation Library cataloged model) to answer a novel policy question:\n\nHow would doubling both (a) the Child Tax Credit and (b) the Supplemental Nutrition Assistance Program (SNAP) net income limit affect a single parent in California with $1,000 monthly rent and $50 monthly broadband costs?\n\nBy bringing together tax and benefit models into a web interface, we can answer this question quickly without programming experience, as well as an unlimited array of questions like it. The result is a table breaking down the household’s net income by program, as well as graphs of net income and marginal tax rates as the household’s earnings vary.\nI close with a quick demo of PolicyEngine UK, which adds society-wide results like the impact of reforms on the budget, poverty, and inequality, as well as contributed policy parameters. We’re planning to bring those features to PolicyEngine US, along with state tax and benefit programs in all 50 states, over the next two years (if not sooner).\nFeel free to explore the app and reach out with any questions at max@policyengine.org.\nResources:\n\nPolicyEngine US\nPresentation slides\nPolicyEngine blog post on launching PolicyEngine US"
+ },
+ {
+ "objectID": "posts/2022-04-18-demo-day-ccc-international.html",
+ "href": "posts/2022-04-18-demo-day-ccc-international.html",
+ "title": "Demo Day: Analyzing tax competitiveness with Cost-of-Capital-Calculator",
+ "section": "",
    "text": "In the Demo Day video shared here, I show how to use open source tools to analyze international corporate tax competitiveness. The two main tools illustrated are the Cost-of-Capital-Calculator (CCC), a model to compute measures of the tax burden on new investments, and Tax Foundation’s International Tax Competitiveness Index (ITCI).\nTax Foundation has made many helpful resources available online. Their measures of international business tax policy are a great example of this. The ITCI outputs and inputs are all well documented, with source code to reproduce results available on GitHub.\nI plug Tax Foundation’s country-by-country data into CCC functions using its Python API. Because CCC is designed to flexibly take array or scalar data, operating on rows of tabular data, such as that in the ITCI, is relatively straightforward. The Google Colab notebook I walk through in this Demo Day can be a helpful example to follow if you’d like to do something similar to this with the Tax Foundation data - or your own data source. From the basic building blocks there (reading in data, calling CCC functions), you can extend the analysis in a number of ways. For example, adding additional years of data (Tax Foundation posts their data back to 2014), modifying economic assumptions, or creating counterfactual policy experiments across sets of countries.\nIf you find this example useful, or have questions or suggestions about this type of analysis, please feel free to reach out to me.\nResources:\n\nColab Notebook\nTax Foundation International Tax Competitiveness Index 2021\nGitHub repo for Tax Foundation ITCI data\nCost-of-Capital-Calculator documentation"
+ },
+ {
+ "objectID": "posts/2021-06-14-demo-day-tax-brain-python-api.html",
+ "href": "posts/2021-06-14-demo-day-tax-brain-python-api.html",
+ "title": "Demo Day: Using the TaxBrain Python API",
+ "section": "",
+ "text": "The TaxBrain project was primarily created to serve as the backend of the Tax-Brain web-application. But at its core, TaxBrain is a Python package that greatly simplifies tax policy analysis. For this PSL Demo-Day, I demonstrated TaxBrain’s capabilities as a standalone package, and how to use it to produce high-level summaries of the revenue impacts of proposed tax policies. The Jupyter Notebook from the presentation can be found here.\nTaxBrain’s Python API allows you to run a full analysis of income tax policies in just three lines of code:\nfrom taxbrain import TaxBrain\n\ntb = TaxBrain(START_YEAR, END_YEAR, use_cps=True, reform=REFORM_POLICY)\ntb.run()\nWhere START_YEAR and END_YEAR are the first and last years, respectively, of the analysis; use_cps is a boolean indicator that you want to use the CPS-based microdata file prepared for use with Tax-Calculator; and REFORM_POLICY is either a JSON file or Python dictionary that specifies a reform suitable for Tax-Calculator. The forthcoming release of TaxBrain will also include a feature that allows you to perform a stacked revenue analysis as well. The inspiration for this feature was presented by Jason DeBacker in a previous demo-day.\nOnce TaxBrain has been run, there are a number of methods and functions included in the package to create tables and plots to summarize the results. I used the Biden 2020 campaign proposal in the demo and the resulting figures are below. The first is a “volcano plot” that makes it easy to see the magnitude of the change in tax liability individuals across the income distribution face. Each dot represents a tax unit, and the x and y variables can be customized based on the user’s needs.\n\nThe second gives a higher-level look at how taxes change in each income bin. 
It breaks down what percentage of each income bin faces a tax increase or decrease, and the size of that change.\n\nThe final plot shown in the demo simply shows tax liabilities by year over the budget window.\n\nThe last feature I showed was TaxBrain’s automated reports. TaxBrain uses saved results and an included report template to write a report summarizing the findings of your simulation. The reports include tables and figures similar to what you may find in similar write-ups by the Joint Committee on Taxation or Tax Policy Center, including a summary of significant changes caused by the reform, and all you need is one line of code:\nreport(tb, name='Biden Proposal', outdir='biden', author='Anderson Frailey')\nThe above code will save a PDF copy of the report in a directory called biden along with PNG files for each of the graphs created and the raw Markdown text used for the report, which you can then edit as needed if you would like to add content to the report that is not already included. Screenshots of the default report are included below.\n \nThere are of course downsides to using TaxBrain instead of Tax-Calculator directly. Specifically, it’s more difficult, and sometimes impossible, to perform custom tasks like modeling a feature of the tax code that hasn’t been added to Tax-Calculator yet or advanced work with marginal tax rates. But for day-to-day tax modeling, the TaxBrain Python package can significantly simplify any workflow.\nResources:\n\nTax-Brain GitHub repo\nTax-Brain Documentation"
+ },
+ {
+ "objectID": "posts/2021-03-02-demo-day-taxbrain-to-taxcruncher.html",
+ "href": "posts/2021-03-02-demo-day-taxbrain-to-taxcruncher.html",
+ "title": "Demo Day: Moving policy reform files from Tax-Brain to Tax-Cruncher",
+ "section": "",
+ "text": "Check out the video:\n\n\nShow notes:\nI demonstrate how to move a policy reform file from Tax-Brain to Tax-Cruncher using the Compute.Studio API. See the Demo C/S simulation linked below for text instructions that accompany the video.\nResources:\n\nDemo C/S simulation with instructions"
+ },
+ {
+ "objectID": "posts/2021-12-28-2021-year-in-review.html",
+ "href": "posts/2021-12-28-2021-year-in-review.html",
+ "title": "2021: A year in review",
+ "section": "",
    "text": "As 2021 winds down, I wanted to take a few minutes to reflect on the Policy Simulation Library’s efforts over the past year. With an amazing community of contributors, supporters, and users, PSL has been able to make a real impact in 2021.\nThe library saw two new projects achieve “cataloged” status: Tax Foundation’s Capital Cost Recovery model and the Federal Reserve Bank of New York’s DSGE.jl model. Both models satisfy all the PSL criteria for transparency and reproducibility. Both are also written entirely in open source software: the Capital Cost Recovery model is in R and the DSGE model in Julia.\nAn exciting new project to join the Library this year is PolicyEngine. PolicyEngine is building open source tax and benefit microsimulation models and very user-friendly interfaces to these models. The goal of this project is to take policy analysis to the masses through intuitive web and mobile interfaces for policy models. The UK version of the PolicyEngine app has already seen use from politicians interested in reforming the tax and benefit system in the UK.\nAnother excellent new addition to the library is the Federal-State Tax Project. This project provides data imputation tools to allow for state tax data that are representative of each state as well as federal totals. These datasets can then be used in microsimulation models, such as Tax-Calculator, to study the impact of federal tax laws across the states. Matt Jensen and Don Boyd have published several pieces with these tools, including in State Tax Notes.\nPSL Foundation became an official business entity in 2021. While still awaiting a letter of determination for 501(c)(3) status from the IRS, PSL Foundation was able to raise more than $25,000 in the last few months of 2021 to support open source policy analysis!\nPSL community members continued to interact several times each week in our public calls. 
The PSL Shop was launched in 2021 so that anyone can get themselves some PSL swag (with some of each purchase going back to the PSL Foundation to support the Library). In addition, PSL hosted 20 Demo Day presentations from 11 different presenters! These short talks covered everything from new projects to interesting applications of some of the first projects to join the Library, as well as general open source tools.\nAs in past years, PSL cataloged and incubating models were found to be of great use in current policy debates. Whether it was the ARPA, Biden administration proposals to expand the CTC, or California’s Basic Income Bill, the accessibility and ability to reproduce results from these open source projects has made them a boon to policy analysts.\nWe are looking forward to a great 2022! We expect the Library to continue to grow, foresee many interesting and helpful Demo Days, and are planning a DC PSL Workshop for March 2022. We hope to see you around these or other events!\nBest wishes from PSL for a happy and healthy New Year!\nResources:\n\nPSL Foundation\nPSL Twitter Feed\nPSL YouTube channel\nPSL on Open Collective"
+ },
+ {
+ "objectID": "posts/2022-06-28-demo-day-github.html",
+ "href": "posts/2022-06-28-demo-day-github.html",
+ "title": "Demo Day: Getting Started with GitHub",
+ "section": "",
+ "text": "Git and GitHub often present themselves as barriers to entry to would-be contributors to PSL projects, even for those who are otherwise experienced with policy modeling. But these tools are critical to collaboration on open source projects. In the Demo Day video linked above, I cover some of the basics to get set up and begin contributing to an open source project.\nThere are four steps I outline:\n\nCreate a “fork” of the repository you are interested in. A fork is a copy of the source code that resides on GitHub (i.e., in the cloud). A fork gives you control over a copy of the source code. You will be able to merge in changes to the code on this fork, even if you don’t have permissions to do so with the original repository.\n“Clone” the fork. Cloning will download a copy of the source code from your fork onto your local machine. But cloning is more than just downloading the source code. It will include the version history of the code and automatically create a link between the local files and the remote files on your fork.\nConfigure your local files to talk to both your fork (which has a default name of origin) and the original repository you forked from (which typically has the default name of upstream). Do this by using your command prompt or terminal to navigate to the directory you just cloned. From there, run:\n\ngit remote add upstream URL_to_original_repo.git\nAnd check that this worked by giving the command:\ngit remote -v\nIf things worked, you should see URLs to your fork and the upstream repository with “(fetch)” and “(push)” by them More info on this is in the Git docs.\n\nNow that you have copies of the source code on your fork and on your local machine, you are ready to begin contributing. As you make changes to the source code, you’ll want to work on development branches. Branches are copies of the code. Ideally, you keep your “main” (or “master”) branch clean (i.e., your best version of the code) and develop the code on branches. 
When you’ve completed the development work (e.g., adding a new feature), you will then merge this into the “main” branch.\n\nI hope this helps you get started contributing to open source projects. Git and GitHub are valuable tools and there is lots more to learn, but these basics will get you going. For more information, see the links below. If you want to get started working with a project in the Library, feel free to reach out to me through the relevant repo (@jdebacker on GitHub) or drop into a PSL Community Call (dates on the PSL Calendar).\nResources:\n\nPSL Git Tutorial\nGit Basics"
+ },
+ {
+ "objectID": "posts/2022-07-14-demo-day-cambridge-cash-assistance.html",
+ "href": "posts/2022-07-14-demo-day-cambridge-cash-assistance.html",
+ "title": "Demo Day: How does targeted cash assistance affect incentives to work?",
+ "section": "",
+ "text": "In this week’s Demo Day, I shared my paper published at the Center for Growth and Opportunity in June. “How does targeted cash assistance affect incentives to work?” analyzed a program Mayor Sumbul Siddiqui proposed in Cambridge, Massachusetts to provide $500 per month for 18 months to all families with dependents and income below 200% of the poverty line.\nTargeted programs like these are common in guaranteed income pilots, and in some enacted policies, and I find that it would cost-effectively reduce poverty: if expanded to Massachusetts, it would cost $1.2 billion per year and cut child poverty 42%.\nHowever, that targeting comes at a cost. Using the OpenFisca US microsimulation model (supported by the Center for Growth and Opportunity and cataloged by the Policy Simulation Library), I find that the program would deepen an existing welfare cliff at 200% of the poverty line. For example, a family of four would lose over $19,000 total—$9,000 from the cash assistance and $10,000 from other benefits—once they earn a dollar above 200% of the poverty line (about $55,000). To recover those lost benefits, they would have to earn an additional $26,000, a range I call the “earnings dead zone”.\nMy presentation reviews these trends in both slides and the PolicyEngine US app for computing the impacts of tax and benefit policy. For example, I show how repealing the SNAP emergency allotment would smooth out welfare cliffs, while reducing resources available to low-income families, and how a universal child allowance avoids work disincentives while less cost-effectively reducing poverty.\nPolicymakers face trade-offs between equity and efficiency, and typically labor supply responses consider marginal tax rates. With their infinite marginal tax rates, welfare cliffs are a less explored area, even though they surface in several parts of the tax and benefit system. This paper makes a start, but more research is yet to be done."
+ },
+ {
+ "objectID": "posts/2022-12-31-2022-year-in-review.html",
+ "href": "posts/2022-12-31-2022-year-in-review.html",
+ "title": "2022: A year in review",
+ "section": "",
+ "text": "This has been another successful year for the Policy Simulation Library, whose great community of contributors continue to make innovative advances in open source policy analysis, and for the PSL Foundation, which supports the Library and its community. We are so thankful for all those who have made financial or technical contributions to the PSL this year! In this blog post, I want to take this time at the end of the year to reflect on a few of the highlights from 2022.\nPolicyEngine, a PSL Foundation fiscally-sponsored project, launched PolicyEngine US in April and has since seen many use cases of the model (check out the PolicyEngine year-in-review here). PolicyEngine had begun by leveraging the OpenFisca platform, but has transitioned to their own-maintained PolicyEngine Core. PolicyEngine Core and their related projects (such as PolicyEngine US and PolicyEngine UK) already meet all the criteria set forth by the Policy Simulation Library. Keep an eye out for lots more excellent tax and benefit policy analysis tools from PolicyEngine in 2023 and beyond!\nPSL Foundation has partnered with QuantEcon, acting as a fiscal sponsor for their projects that provide training and training materials for economic modeling and econometrics using open source tools. QuantEcon ran a massive open online class in India that had more than 1000 registrants in summer of 2022. They also ran an online course for over 100 students from universities in Africa in 2022. Further, with the funding received through their partnership with PSL Foundation, QuantEcon will continue these efforts in 2023 with a planned, in-person course in India.\nPSL hosted its first in-person workshop in March. The workshop focused on open source tools for tax policy analysis including Tax-Calculator, Cost-of-Capital-Calculator, OG-USA, and PolicyEngine US. The PSL event was, appropriately enough, hosted at the MLK Memorial Library in DC. 
We filled the space with 30 attendees from think tanks, consultancies, and government agencies. The workshop was a great success and we look forward to hosting more in-person workshops in the future.\nPSL’s bi-weekly Demo Day series continued throughout 2022, with 13 Demo Days this year. In these, we saw a wide array of presenters from institutions such as the Federal Reserve Bank of Atlanta, PolicyEngine, Tax Foundation, National Center for Children in Poverty, IZA Institute of Labor Economics, Channels, the University of South Carolina, the Center for Growth and Opportunity, and the American Enterprise Institute. You can go back and rewatch any of these presentations on YouTube.\nIt’s been a fantastic year and we expect even more from the community and PSL Foundation in 2023. PSL community members continue to interact several times each week on our public calls. Check out the events page and join us in the New Year!\nFrom all of us at the PSL, best wishes for a happy and healthy New Year!\nResources:\n\nPSL Foundation\nPSL Twitter Feed\nPSL YouTube channel\nPSL on Open Collective\nPSL Shop for PSL branded merchandise"
+ },
+ {
+ "objectID": "about.html",
+ "href": "about.html",
+ "title": "PSL Blog",
+ "section": "",
+ "text": "Updates from the Policy Simulation Library."
+ },
+ {
+ "objectID": "index.html",
+ "href": "index.html",
+ "title": "Policy Simulation Library Blog",
+ "section": "",
+ "text": "2023: A year in review\n\n\n\n\n\n\npsl\n\n\npsl-foundation\n\n\n\nHighlights from the Policy Simulation Library in 2023.\n\n\n\n\n\nDec 28, 2023\n\n\nJason DeBacker\n\n\n\n\n\n\n\n\n\n\n\n\n2022: A year in review\n\n\n\n\n\n\npsl\n\n\npsl-foundation\n\n\n\nHighlights from the Policy Simulation Library in 2022.\n\n\n\n\n\nDec 31, 2022\n\n\nJason DeBacker\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: How does targeted cash assistance affect incentives to work?\n\n\n\n\n\n\ndemo-days\n\n\nbenefits\n\n\nus\n\n\n\nHow a proposed program in Cambridge, Massachusetts would affect poverty and incentives.\n\n\n\n\n\nJul 14, 2022\n\n\nMax Ghenis\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: Getting Started with GitHub\n\n\n\n\n\n\ndemo-days\n\n\ngithub\n\n\ngit\n\n\nworkflow\n\n\ngetting-started\n\n\n\nThe basics of forking and cloning repositories and working on branches.\n\n\n\n\n\nJun 28, 2022\n\n\nJason DeBacker\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: Analyzing tax competitiveness with Cost-of-Capital-Calculator\n\n\n\n\n\n\ndemo-days\n\n\ncost-of-capital-calculator\n\n\nbusiness-taxation\n\n\ncorporate-income-tax\n\n\ntaxes\n\n\n\nUsing Cost-of-Capital-Calculator with data on international business tax policies.\n\n\n\n\n\nApr 18, 2022\n\n\nJason DeBacker\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: Modeling taxes and benefits with the PolicyEngine US web app\n\n\n\n\n\n\ndemo-days\n\n\napps\n\n\ntaxes\n\n\nbenefits\n\n\nus\n\n\n\nPolicyEngine US is a new web app for computing the impact of US tax and benefit policy.\n\n\n\n\n\nApr 12, 2022\n\n\nMax Ghenis\n\n\n\n\n\n\n\n\n\n\n\n\nPolicy Simulation Library DC Workshop: Open source tools for analyzing tax policy\n\n\n\n\n\n\nPSL\n\n\nPSL-Foundation\n\n\nWorkshop\n\n\n\nWashington, DC open-source modeling workshop, March 25, 2022, 8:30am-1:00pm, Martin Luther King, Jr. 
Memorial Library.\n\n\n\n\n\nMar 3, 2022\n\n\nJason DeBacker\n\n\n\n\n\n\n\n\n\n\n\n\n2021: A year in review\n\n\n\n\n\n\npsl\n\n\npsl-foundation\n\n\n\nHighlights from the Policy Simulation Library in 2021.\n\n\n\n\n\nDec 28, 2021\n\n\nJason DeBacker\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: Using synthimpute for data fusion\n\n\n\n\n\n\ndemo-days\n\n\npython\n\n\ndata-fusion\n\n\nsynthimpute\n\n\n\nThe synthimpute Python package fuses and synthesizes economic datasets with machine learning.\n\n\n\n\n\nDec 8, 2021\n\n\nMax Ghenis\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: The OG-Core platform\n\n\n\n\n\n\ndemo-days\n\n\npython\n\n\nmacroeconomics\n\n\noverlapping-generations\n\n\n\nA Python platform for building country-specific overlapping generations general equilibrium models.\n\n\n\n\n\nNov 1, 2021\n\n\nJason DeBacker\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: Deploying apps on Compute Studio\n\n\n\n\n\n\ndemo-days\n\n\npolicy-simulation-library\n\n\ncompute-studio\n\n\n\nHow to deploy apps on Compute Studio using the new automated deployments feature.\n\n\n\n\n\nSep 20, 2021\n\n\nHank Doupe\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: Unit testing for open source projects\n\n\n\n\n\n\ndemo-days\n\n\nR\n\n\nPython\n\n\nunit-testing\n\n\n\nHow to ensure that individual functions do what you expect.\n\n\n\n\n\nAug 9, 2021\n\n\nJason DeBacker\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: Constructing tax data for the 50 states\n\n\n\n\n\n\ndemo-days\n\n\ntax\n\n\ndata\n\n\nus\n\n\n\nA new dataset to facilitate state-level analysis of federal tax reforms.\n\n\n\n\n\nJul 16, 2021\n\n\nDon Boyd\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: Using the TaxBrain Python API\n\n\n\n\n\n\ndemo-days\n\n\nindividual-income-tax\n\n\ntax-brain\n\n\ntax-calculator\n\n\n\nA programmatic interface to compute the impact of tax reform.\n\n\n\n\n\nJun 14, 2021\n\n\nAnderson Frailey\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: Updating Jupyter Book documentation with GitHub 
Actions\n\n\n\n\n\n\ndemo-days\n\n\njupyter-book\n\n\nGH-actions\n\n\ndocumentation\n\n\n\nHow to keep interactive programmatic notebook-based documentation up-to-date in your pull request workflow.\n\n\n\n\n\nMay 17, 2021\n\n\nJason DeBacker\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: Producing stacked revenue estimates with the Tax-Calculator Python API\n\n\n\n\n\n\ndemo-days\n\n\nindividual-income-tax\n\n\ntax-brain\n\n\ntax-calculator\n\n\n\nHow to evaluate the cumulative effects of a multi-part tax reform.\n\n\n\n\n\nApr 5, 2021\n\n\nJason DeBacker\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: Stitching together apps on Compute Studio\n\n\n\n\n\n\ndemo-days\n\n\nPSL\n\n\n\nCreating an app with the Compute Studio API.\n\n\n\n\n\nMar 8, 2021\n\n\nHank Doupe\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: Moving policy reform files from Tax-Brain to Tax-Cruncher\n\n\n\n\n\n\ndemo-days\n\n\n\nHow to move reforms between a tax-unit-level and society-wide model with the Compute.Studio API.\n\n\n\n\n\nMar 2, 2021\n\n\nMatt Jensen\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: Contributing to PSL projects\n\n\n\n\n\n\ndemo-days\n\n\nPSL\n\n\ngit\n\n\ngithub\n\n\n\nHow to help software projects in the Policy Simulation Library.\n\n\n\n\n\nMar 2, 2021\n\n\nJason DeBacker\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: Running the scf and microdf Python packages in Google Colab\n\n\n\n\n\n\ndemo-days\n\n\nmicrodf\n\n\nscf\n\n\n\nAnalyzing US wealth data in a web-based Python notebook.\n\n\n\n\n\nJan 29, 2021\n\n\nMax Ghenis\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: The OG-USA macroeconomic model of U.S. fiscal policy\n\n\n\n\n\n\ndemo-days\n\n\nOG-USA\n\n\nTax-Calculator\n\n\nopen-source\n\n\npolicy-simulation-library\n\n\ncompute-studio\n\n\nus\n\n\n\nHow to model the macroeconomic effects of tax reform with a web app.\n\n\n\n\n\nJan 28, 2021\n\n\nRichard W. 
Evans\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: Tax-Brain\n\n\n\n\n\n\ndemo-days\n\n\nindividual-income-tax\n\n\ntax-brain\n\n\nus\n\n\n\nComputing the impact of US tax reform with the Tax-Brain web-app.\n\n\n\n\n\nDec 23, 2020\n\n\nAnderson Frailey\n\n\n\n\n\n\n\n\n\n\n\n\n2020: A year in review\n\n\n\n\n\n\npsl\n\n\npsl-foundation\n\n\n\nHighlights from the Policy Simulation Library in 2020.\n\n\n\n\n\nDec 23, 2020\n\n\nJason DeBacker\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: Cost-of-Capital-Calculator Web Application\n\n\n\n\n\n\ndemo-days\n\n\ncost-of-capital-calculator\n\n\nbusiness-taxation\n\n\ncorporate-income-tax\n\n\n\nComputing the impact of taxes on business investment incentives under alternative policy scenarios.\n\n\n\n\n\nDec 3, 2020\n\n\nJason DeBacker\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: Tax-Cruncher\n\n\n\n\n\n\ndemo-days\n\n\nindividual-income-tax\n\n\ntax-cruncher\n\n\n\nHow to calculate a taxpayer’s liabilities under current law and under a policy reform.\n\n\n\n\n\nNov 23, 2020\n\n\nPeter Metz\n\n\n\n\n\n\n\n\n\n\n\n\nDemo Day: Building policy reform files\n\n\n\n\n\n\ndemo-days\n\n\n\nThe first in Policy Simulation Library’s new live demo series describes specifying tax reforms.\n\n\n\n\n\nNov 18, 2020\n\n\nMatt Jensen\n\n\n\n\n\n\n\n\n\n\n\n\nIntroducing the PSL Blog\n\n\n\n\n\n\nannouncements\n\n\n\nA new way to follow models in the Policy Simulation Library catalog.\n\n\n\n\n\nNov 6, 2020\n\n\nMax Ghenis\n\n\n\n\n\n\nNo matching items"
+ },
+ {
+ "objectID": "posts/2022-03-03-DC-workshop.html",
+ "href": "posts/2022-03-03-DC-workshop.html",
+ "title": "Policy Simulation Library DC Workshop: Open source tools for analyzing tax policy",
+ "section": "",
+ "text": "The Policy Simulation Library is hosting a workshop in Washington, DC on March 25 on open source tools for the analysis of tax policy. Participants will learn how to use open source models from the Library for revenue estimation, distributional analysis, and to simulate economic impacts of tax policy. The workshop is intended to be a hands-on experience and participants can expect to leave with the required software, documentation, and knowledge to continue using these tools. All models in the workshop are written in the Python programming language–familiarity with the language is helpful, but not required.\nWorkshop Schedule:\n\n8:15-8:45a: Breakfast\n8:45-9:00a: Introduction\n9:00-9:50a: Using Tax-Calculator for revenue estimation and distributional analysis (Matt Jensen)\n10:00-10:50a: Estimating effective tax rates on investment with Cost-of-Capital-Calculator (Jason DeBacker)\n11:00-11:50a: Macroeconomic modeling of fiscal policy with OG-Core and OG-USA (Richard W. Evans)\nnoon-1:00p: Lunch and demonstration of PolicyEngine (Max Ghenis)\n\nThe workshop will be held at the Martin Luther King Jr. Memorial Library in Washington, DC. Participants are expected to arrive by 8:30am and the program will conclude at 1:00pm. Breakfast and lunch will be provided. PSL Foundation is sponsoring the event and there is no cost to attend. Attendance is limited to 30 in order to make this a dynamic and interactive workshop.\nTo register, please use this Google Form. Registration will close March 11. Participants will be expected to bring a laptop to the workshop where they can interact with the software in real time with the instructors. Registered participants will receive an email before the event with a list of software to install before the workshop.\nPlease feel free to share this invitation with your colleagues.\nQuestions about the workshop can be directed to Jason DeBacker at jason.debacker@gmail.com."
+ },
+ {
+ "objectID": "posts/2020-12-23-demo-day-tax-brain.html",
+ "href": "posts/2020-12-23-demo-day-tax-brain.html",
+ "title": "Demo Day: Tax-Brain",
+ "section": "",
+ "text": "For this PSL demo-day I showed how to use the Tax-Brain web-application, hosted on Compute Studio, to analyze proposed individual income tax policies. Tax-Brain integrates the Tax-Calculator and Behavioral-Responses models to make running both static and dynamic analyses of the US federal income and payroll taxes simple. The web interface for the model makes it possible for anyone to run their own analyses without writing a single line of code.\nWe started the demo by simply walking through the interface and features of the web-app before creating our own sample reform to model. This reform, which to my knowledge does not reflect any proposals currently up for debate, included changes to the income and payroll tax rates, bringing back personal exemptions, modifying the standard deduction, and implementing a universal basic income.\nWhile the model ran, I explained how Tax-Brain validated all of the user inputs, the data behind the model, and how the final tax liability projections are determined. We concluded by looking through the variety of tables and graphs Tax-Brain produces and how they can easily be shared with others.\nResources:\n\nSimulation from the demonstration\nTax-Brain GitHub repo\nTax-Calculator documentation\nBehavioral-Responses documentation"
+ },
+ {
+ "objectID": "posts/2021-11-01-demo-day-og-core.html",
+ "href": "posts/2021-11-01-demo-day-og-core.html",
+ "title": "Demo Day: The OG-Core platform",
+ "section": "",
+ "text": "The OG-Core model is a general equilibrium, overlapping generations (OG) model suitable for evaluating fiscal policy. Since the work of Alan Auerbach and Laurence Kotlikoff in the 1980s, this class of model has become a standard in the macroeconomic analysis of tax and spending policy. This is for good reason. OG models are able to capture the impacts of taxes and spending in the short and long run, examine incidence of policy across generations of people (not just short run or steady state analysis of a cross-section of the economy), and capture important economic dynamics (e.g., crowding out effects of deficit-financed policy).\nIn the PSL Demo Day presentation linked above, I cover the basics of OG-Core: its history, its API, and how country-specific models can use OG-Core as a dependency. In brief, OG-Core provides a general overlapping generations framework, from which parameters can be calibrated to represent particular economies. Think of it this way: an economic model is just a set of parameters plus a system of equations. OG-Core spells out all of the equations to represent an economy with heterogeneous agents, production and government sectors, open economy options, and detailed policy rules. OG-Core also includes default values for all parameters, along with parameter metadata and parameter validation rules. A country specific application is then just a particular parameterization of the general OG-Core model.\nAs an example of a country-specific application, one can look at the OG-USA model. This model provides a calibration of OG-Core to the United States. The source code in that project allows one to go from raw data sources to the estimation and calibration procedures used to determine parameter values representing the United States, to parameter values in formats suitable for use in OG-Core. 
Country-specific models like OG-USA include (where available) links to microsimulation models of tax and spending programs to allow detailed microdata of actual and counterfactual policies to inform the net tax-transfer functions used in the OG-Core model. For those interested in building their own country-specific model, the OG-USA project provides a good example to work from.\nWe encourage you to take a look at OG-Core and related projects. New contributions and applications are always welcome. If you have questions or comments, reach out through the relevant repositories on GitHub to me, @jdebacker, or Rick Evans, @rickecon.\nResources:\n\nOG-Core documentation\nOG-USA documentation\nTax-Calculator documentation\nOG-UK repository\nOpenFisca-UK repository\nSlides from the Demo Day presentation"
+ },
+ {
+ "objectID": "posts/2020-12-03-demo-day-cost-of-capital-calculator.html",
+ "href": "posts/2020-12-03-demo-day-cost-of-capital-calculator.html",
+ "title": "Demo Day: Cost-of-Capital-Calculator Web Application",
+ "section": "",
+ "text": "In the PSL Demo Day video linked above, I demonstrate how to use the Cost-of-Capital-Calculator (CCC) web application on Compute-Studio. CCC computes various measures of the impact of the tax system on business investment. These include the Hall-Jorgenson cost of capital, marginal effective tax rates, and effective average tax rates (following the methodology of Devereux and Griffith (1999)).\nI begin by illustrating the various parameters available for the user to manipulate. These include parameters of the business and individual income tax systems, as well as parameters representing economic assumptions (e.g., inflation rates and nominal interest rates) and parameters dictating financial and accounting policy (e.g., the fraction of financing using debt). Note that all default values for tax policy parameters represent the “baseline policy”, which is defined as the current law policy in the year being analyzed (which itself is a parameter the user can change). Other parameters are estimated using historical data following the methodology of CBO (2014).\nNext, I change a few parameters and run the model. In this example, I move the corporate income tax rate up to 28% and lower bonus depreciation for assets with depreciable lives of 20 years or less to 50%.\nFinally, I discuss how to interpret output. The web app returns a table and three figures summarizing marginal effective total tax rates on new investments. This selection of output helps give one a sense of the the overall changes, as well as effects across asset types, industries, and type of financing. For the full model output, one can click on “Download Results”. Doing so will download four CSV files contain several measures of the impact of the tax system on investment for very fine asset and industry categories. Users can take these files and create tables and visualizations relevant to their own use case.\nPlease take the model for a spin and simulate your own reform. 
If you have questions, comments, or suggestions, please let me know on the PSL Discourse (non-technical questions) or by opening an issue in the CCC GitHub repository (technical questions).\nResources:\n\nCompute Studio simulation used in the demonstration\nCost-of-Capital-Calculator web app\nCost-of-Capital-Calculator documentation\nCost-of-Capital-Calculator GitHub repository"
+ },
+ {
+ "objectID": "posts/2020-11-23-demo-day-tax-cruncher.html",
+ "href": "posts/2020-11-23-demo-day-tax-cruncher.html",
+ "title": "Demo Day: Tax-Cruncher",
+ "section": "",
+ "text": "For the Demo Day on November 16, I showed how to calculate a taxpayer’s liabilities under current law and under a policy reform with Tax-Cruncher. The Tax-Cruncher web application takes two sets of inputs: a taxpayer’s demographic and financial information and the provisions of a tax reform.\nFor the first Demo Day example (3:50), we looked at how eliminating the state and local tax (SALT) deduction cap and applying payroll tax to earnings above $400,000 would affect a high earner. In particular, our hypothetical filer had $500,000 in wages, $100,000 in capital gains, and $100,000 in itemizable expenses. You can see the results at Compute Studio simulation #634.\nFor the second example (17:50), we looked at how expanding the Earned Income Tax Credit (EITC) and Child Tax Credit would impact a family with $45,000 in wages and two young children. You can see the results at Compute Studio simulation #636.\nResources:\n\nTax-Cruncher\nTax-Cruncher-Biden"
+ },
+ {
+ "objectID": "posts/2021-01-28-demo-day-how-to-use-og-usa.html",
+ "href": "posts/2021-01-28-demo-day-how-to-use-og-usa.html",
+ "title": "Demo Day: The OG-USA macroeconomic model of U.S. fiscal policy",
+ "section": "",
+ "text": "In this PSL Demo Day, I demonstrate how to use the open source OG-USA macroeconomic model of U.S. fiscal policy. Jason DeBacker and I (Richard Evans) have been the core maintainers of this project and repository for the last six years. This demo is organized into the following sections. The YouTube webinar linked above took place on January 11, 2021."
+ },
+ {
+ "objectID": "posts/2021-01-28-demo-day-how-to-use-og-usa.html#brief-note-about-the-value-of-the-psl-community",
+ "href": "posts/2021-01-28-demo-day-how-to-use-og-usa.html#brief-note-about-the-value-of-the-psl-community",
+ "title": "Demo Day: The OG-USA macroeconomic model of U.S. fiscal policy",
+ "section": "Brief note about the value of the PSL community",
+ "text": "Brief note about the value of the PSL community\nThe Policy Simulation Library is a decentralized organization of open source policy models. The Policy Simulation Library GitHub organization houses many open source repositories, each of which represents a curated policy project by a diverse group of maintainers. The projects that have met the highest standards of best practices and documentation are designated as psl-cataloged , while newer projects that are in earlier stages are designated as psl-incubating . The philosophy and goal of the PSL environment is to make policy modeling open and transparent. It also allows more collaboration and cross-project contributions and interactions.\nThe Policy Simulation Library group has been holding these PSL Demo Day webinars since the end of 2020. The video of each webinar is available on the Policy Simulation Library YouTube channel. These videos are a great resource for learning the different models available in the PSL community, how the models interact, how to contribute to them, and what is on the horizon in their development. Also excellent in many of the PSL Demo Day webinars is a demonstration of how to use the models on the Compute Studio web application platform.\nI have been a participant in and contributor to the PSL community since its inception. I love economic policy modeling. And I learned how sophisticated and complicated economic policy models can be. And any simulation can have hundreds of underlying assumptions, some of which may not be explicitly transparent. I think models that are used for public policy analysis have a philosophical imperative to be open source. This allows others to verify results and test sensitivity to assumptions.\nAnother strong benefit of open source modeling is that it is fundamentally apolitical. 
With proprietary closed-source policy models, an outside observer might criticize the results of the model based on the perceived political biases of the modeler or the sponsoring organization. With open-source models, a critic can be redirected to the underlying assumptions, structure, and content of the model. This is constructive criticism and debate that moves the science forward. In the current polarized political environment in the U.S., open-source modeling can provide a constructive route for bipartisan cooperation and the democratization of computational modeling. Furthermore, open-source modeling and workflow encourage the widest forms of collaboration and contributions."
+ },
+ {
+ "objectID": "posts/2021-01-28-demo-day-how-to-use-og-usa.html#description-of-og-usa-model",
+ "href": "posts/2021-01-28-demo-day-how-to-use-og-usa.html#description-of-og-usa-model",
+ "title": "Demo Day: The OG-USA macroeconomic model of U.S. fiscal policy",
+ "section": "Description of OG-USA model",
+ "text": "Description of OG-USA model\nOG-USA is an open-source overlapping generations, dynamic general equilibrium, heterogeneous agent, macroeconomic model of U.S. fiscal policy. The GitHub repository for the OG-USA source code is github.com/PSLmodels/OG-USA. This repository contains all the source code and instructions for loading and running OG-USA and all of its dependencies on your local machine. We will probably do another PSL Demo Day on how to run OG-USA locally. This Demo Day webinar is about running OG-USA on the Compute Studio web application. See Section “Using OG-USA on Compute.Studio” below.\nAs a heterogeneous agent macroeconomic model, OG-USA allows for distributional analyses at the individual and firm level. That is, you can simulate the model and answer questions like, “How will an increase in the top three personal income tax rates affect people of different ages and income levels?” Microsimulation models can answer these types of distributional analysis questions as well. However, the difference between a macroeconomic model and a microsimulation model is that the macroeconomic models can simulate how each of those individuals and firms will respond to a policy change (e.g., lower labor supply or increased investment demand) and how those behavioral responses will add up and feed back into the macroeconomy (e.g., the effect on GDP, government revenue, government debt, interest rates, and wages).\nOG-USA is a large-scale model and comprises tens of thousands of lines of code. The status of all of this code being publicly available on the internet with all collaboration and updates also public makes this an open source project. However, it is not enough to simply post one’s code. We have gone to great lengths to make in-line comments or “docstring” in the code to clarify the purpose of each function and line of code. For example, look in the OG-USA/ogusa/household.py module. The first function on line 18 is the marg_ut_cons() function. 
As is described in its docstring, its purpose is to “Compute the marginal utility of consumption.”\nThese in-code docstrings are not enough. We have also created textbook style OG-USA documentation at pslmodels.github.io/OG-USA/ using the Jupyter Book medium. This form of documentation has the advantage of being in book form and available online. It allows us to update the documentation in the open-source repository so changes and versions can be tracked. It describes the OG-USA API, OG-USA theory, and OG-USA calibration. As with the model, this documentation is always a work in progress. But being open-source allows outside contributors to help with its updating and error checking.\nOne particular strength of the OG-USA model I want to highlight is its interaction with microsimulation models to incorporate information about tax incentives faced by the heterogeneous households in the model. We have interfaced OG-USA with microsimulation models in India and at the European Commission. OG-USA’s default for modeling the United States is to use the open-source Tax-Calculator microsimulation model, which was described by Anderson Frailey in the last Demo Day of 2020. However, DeBacker and I currently have a project in which we use OG-USA to simulate policies using the Tax Policy Center’s microsimulation model. The way OG-USA interfaces with microsimulation models to incorporate rich tax data is described in the documentation in the calibration chapter entitled, “Tax Functions”."
+ },
+ {
+ "objectID": "posts/2021-01-28-demo-day-how-to-use-og-usa.html#using-og-usa-on-compute-studio",
+ "href": "posts/2021-01-28-demo-day-how-to-use-og-usa.html#using-og-usa-on-compute-studio",
+ "title": "Demo Day: The OG-USA macroeconomic model of U.S. fiscal policy",
+ "section": "Using OG-USA on Compute Studio",
+ "text": "Using OG-USA on Compute Studio\nIn the demonstration, I focus on how to run experiments and simulations with OG-USA using the Compute Studio web application platform rather than installing running the model on your local machine. To use OG-USA on this web application, you will need a Compute Studio account. Once you have an account, you can start running any model available through the site. For some models, you will have to pay for the compute time, although the cost of running these models is very modest. However, all Compute Studio simulations of the OG-USA model are currently sponsored by the Open Source Economics Laboratory. This subsidy will probably run out in the next year. But we are always looking for funding for these models.\nOnce you are signed up and logged in to your Compute Studio account, you can go to the OG-USA model on Compute Studio at compute.studio/PSLmodels/OG-USA. The experiment that we simulated in the demonstration is available at compute.studio/PSLmodels/OG-USA/206. The description at the top of the simulation page describes the changes we made. You can look through the input page by clicking on the “Inputs” tab. We ran the model by clicking the green “Run” button at the lower left of the page. The model took about 5 hours to run, so I pre-computed the results that we discussed in the demo. The outputs of the experiment are available in the “Outputs” tab on the page. I also demonstrated how one can click the “Download Results” button at the bottom of the “Outputs” tab to download more results from the simulation. However, the full set of results is only available by installing and running the OG-USA model simulation on your local machine.\nThe benefits of the Compute Studio web application are that running the OG-USA model is much easier for the non-expert, and the multiple-hour computation time can be completed on a remote machine in the cloud."
+ },
+ {
+ "objectID": "posts/2021-01-28-demo-day-how-to-use-og-usa.html#resources",
+ "href": "posts/2021-01-28-demo-day-how-to-use-og-usa.html#resources",
+ "title": "Demo Day: The OG-USA macroeconomic model of U.S. fiscal policy",
+ "section": "Resources",
+ "text": "Resources\n\nPSL Demo Day YouTube webinar: “How to use OG-USA”\nOG-USA on Compute Studio\nSimulation from the demonstration\nOG-USA GitHub repo\nOG-USA documentation\nTax-Calculator GitHub repo"
+ },
+ {
+ "objectID": "posts/2021-12-08-demo-day-synthimpute.html",
+ "href": "posts/2021-12-08-demo-day-synthimpute.html",
+ "title": "Demo Day: Using synthimpute for data fusion",
+ "section": "",
+ "text": "Suppose a policy analyst sought to estimate the impact of a policy that changed income tax rates and benefit rules while also adding a progressive wealth tax. The standard approach is to use a microsimulation model, where the rules are programmed as code, and then to run that program over a representative sample of households. Unfortunately, no single US government survey captures all the households characteristics needed to analyze this policy; in particular, the reliable tax and benefit information lies in surveys like the Current Population Survey (CPS), while wealth lies in the Survey of Consumer Finances (SCF).\nAssuming the analyst wanted to start with the CPS, they have several options to estimate wealth for households to levy the progressive wealth tax. Two typical approaches include:\n\nLinear regression, predicting wealth from other household characteristics common to the CPS and SCF.\nMatching, in which each CPS household is matched with the most similar household in the SCF.\n\nNeither of these approaches, however, aim to estimate the distribution of wealth conditional on other characteristics. Linear regression explicitly estimates the mean prediction, but that could miss the tails of wealth from whom most of the wealth tax revenue will be collected.\nInstead, the analyst could apply quantile regression to estimate the distribution of wealth conditional on other characteristics, and then measure the effectiveness of the estimation using quantile loss.\nIn this Demo Day, I present the concepts of microsimulation, imputation, and quantile loss to motivate the synthimpute Python package I’ve developed with my PolicyEngine colleague Nikhil Woodruff. In an experiment predicting wealth on a holdout set from the SCF, my former colleague Deepak Singh and I found that random forests significantly outperform OLS and matching for quantile regression, and this is the approach applied in synthimpute for both data fusion and data synthesis. 
The synthimpute API will be familiar to users of scikit-learn and statsmodels, with the difference being that synthimpute’s rf_impute function returns a random value from the predicted distribution; it can also skew the predictions to meet a target total.\nWe’ve used synthimpute to fuse data for research reports at the UBI Center and to enhance the PolicyEngine web app for UK tax and benefit simulation, and we welcome new users and contributors. Feel free to explore the repository or contact me with questions at max@policyengine.org.\nResources:\n\nsynthimpute package on GitHub\nPresentation slides\nUBI Center report on land value taxation in the UK, using synthimpute to impute land value from the UK Wealth and Assets Survey to the Family Resources Survey\nPolicyEngine UK carbon tax example, using synthimpute to impute carbon emissions from the Living Costs and Food Survey to the Family Resources Survey\nNotebook comparing random forests to matching and other techniques using a holdout set from the US Survey of Consumer Finances\nMy blog post on quantile regression for Towards Data Science, which laid the groundwork for synthimpute"
+ },
+ {
+ "objectID": "posts/2021-04-05-demo-day-stacked-revenue-estimates.html",
+ "href": "posts/2021-04-05-demo-day-stacked-revenue-estimates.html",
+ "title": "Demo Day: Producing stacked revenue estimates with the Tax-Calculator Python API",
+ "section": "",
+ "text": "It’s often useful to be able to identify the effects of specific provisions individually and not just the overall impact of a proposal with many provisions. Indeed, when revenue estimates of tax law changes are reported (such as this JCT analysis of the American Rescue Plan Act of 2021), they are typically reported on a provision-by-provision basis. Finding the provision-by-provision revenue estimates is cumbersome with the Tax-Brain web application both because it’s hard to iterate over many provisions and because the order matters when stacking estimates, so that one needs to keep this order in mind as parameter values are updated for each additional provision in a full proposal.\nIn the PSL Demo Day on April 5, 2021, I show how to use the Python API of Tax-Calculator to efficiently produce stacked revenue estimates. In fact, after some initial setup, this can be done with just 12 lines of code (plus a few more to make the output look nice). The Google Colab notebook that illustrates this approach can be found at this link, but here I’ll walk through the four steps that are involved:\n\nDivide up the full proposal into strings of JSON text that contain each provision you want to analyze. My example breaks up the Biden 2020 campaign proposal into seven provisions, but this is illustrative and you can make more or less provisions depending on the detail you would like to see.\nCreate a dictionary that contains, as its values, the JSON strings. A couple notes on this. First, the dictionary keys should be descriptive of the provisions as they will become the labels for each provision in the final table of revenue estimates we produce. Second, order matters here. You’ll want to be sure the current law baseline is first (the value for this will be an empty dictionary). Then you specify the provisions. 
The order you specify will likely affect your revenue estimates from a given provision (for instance, expanding/restricting a deduction has a larger revenue effect when rates are higher), but there are not hard and fast rules on the “right” order. Traditionally, rate changes are stacked first and tax expenditures later in the order.\nIterate over this dictionary. With a dictionary of provisions in hand, we can write a “for loop” to iterate over the provisions, simulating the Tax-Calculator model at each step. Note that when the Policy class object in Tax-Calculator is modified, it only needs to be told the changes in tax law parameters relative to its current state. In other words, when we are stacking provisions, estimating the incremental effect of each, you can think of the Policy object having a baseline policy that is represented by the current law baseline plus all provisions that have been analyzed before the provision at the current iteration. The Policy class was created in this way so that one can easily represent policy changes, requiring the user to only input the set of parameters that are modified, not every single parameter’s value under the hypothetical policy. But this also makes it parsimonious to stack provisions as we are doing here. Notice that the JSON strings for each provision (created in Step 1) can be specified independent of the stacking order. We only needed to slice the full set of proposals into discrete chunks; we didn’t need to worry about creating specifications of cumulative policy changes.\nFormat output for presentation. After we’ve run a Tax-Calculator simulation for the current law baseline plus each provision (and each year in the budget window), we’ve got all the output we need. With this output, we can quickly create a table that will nicely present our stacked revenue estimate. 
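The stacking logic in Steps 2 and 3 can be sketched with a toy stand-in for Tax-Calculator; the revenue function and parameter names here are invented purely for illustration, while the real workflow uses the Policy and Calculator classes as described above:

```python
def toy_revenue(policy):
    # Toy stand-in for a full Tax-Calculator run:
    # revenue = rate * (base - deduction), with hypothetical parameters.
    return policy["rate"] * (policy["base"] - policy["deduction"])

baseline = {"rate": 0.20, "base": 1000.0, "deduction": 100.0}

# Keys label the rows of the final table; insertion order (Python 3.7+)
# is the stacking order, with current law first.
provisions = {
    "current law": {},
    "raise rate": {"rate": 0.25},
    "limit deduction": {"deduction": 50.0},
}

policy = dict(baseline)
stacked, prev = {}, None
for name, reform in provisions.items():
    policy.update(reform)  # incremental change, like Policy.implement_reform
    total = toy_revenue(policy)
    stacked[name] = total if prev is None else total - prev
    prev = total

# Sanity check: the increments sum to the full-proposal effect.
full_effect = toy_revenue(policy) - toy_revenue(baseline)
increments = sum(v for k, v in stacked.items() if k != "current law")
assert abs(increments - full_effect) < 1e-9
```

Because each iteration updates the same policy dictionary, every provision is scored against current law plus all previously stacked provisions, which is exactly the behavior the Policy object gives you for free.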
One good check to do here is to create totals across all provisions and compare this to the simulated revenue effects of running the full set of proposals in one go. This check helps to ensure that you didn’t make an error in specifying your JSON strings. For example, it’s easy to leave out one or more provisions, especially if there are many.\n\nI hope this provides a helpful template for your own analysis. Note that one can modify this code in several useful ways. For example, within the for-loops, the Behavioral-Responses package can be called to produce revenue estimates that take into account behavioral feedback. Or one could store the individual income tax and payroll tax revenue impacts separately (rather than returning the combined values as in the example notebook). Additional outputs (even the full set of microdata after each provision is applied) can be stored for even more analysis.\nIn the future, look for Tax-Brain to add stacked revenue estimates to its capabilities. It’ll still be important for users to carve up their full list of policy changes into sets of provisions as we did in Steps 1 and 2 above, but Tax-Brain will then take care of the rest behind the scenes.\nResources:\n\nColab Notebook with example\nBiden campaign reform file in PSL Examples"
+ },
+ {
+ "objectID": "posts/2020-12-23-2020-year-in-review.html",
+ "href": "posts/2020-12-23-2020-year-in-review.html",
+ "title": "2020: A year in review",
+ "section": "",
+ "text": "This year has been one to forget! But 2020 did have its bright spots, especially in the PSL community. This post reviews some of the highlights from the year.\nThe Library was able to welcome two new models to the catalog in 2020: microdf and OpenFisca-UK. microdf provides a number of useful tools for use with economic survey data. OpenFisca-UK builds off the OpenFisca platform, offering a microsimulation model for tax and benefit programs in the UK.\nIn addition, four new models were added to the Library as incubating projects. The ui-calculator model has received a lot of attention this year in the U.S., as it provides the capability to calculate unemployment insurance benefits across U.S. states, a major mode of delivering financial relief to individuals during the COVID crisis. PCI-Outbreak directly relates to the COVID crisis, using machine learning and natural language processing to estimate the true extent of the COVID pandemic in China. The model finds that actual COVID cases are significantly higher than what official statistics claim. The COVID-MCS model considers COVID case counts and test positivity rates to measure whether or not U.S. communities are meeting certain benchmarks in controlling the spread of the disease. On a lighter note, the Git-Tutorial project provides instruction and resources for learning to use Git and GitHub, with an emphasis on the workflow used by many projects in the PSL community.\nThe organization surrounding the Policy Simulation Library has been bolstered in two ways. First, we have formed a relationship with the Open Collective Foundation, who is now our fiscal host. This allows PSL to accept tax deductible contributions that will support the efforts of the community. Second, we’ve formed the PSL Foundation, with an initial board that includes Linda Gibbs, Glenn Hubbard, and Jason DeBacker.\nOur outreach efforts have grown in 2020 to include the regular PSL Demo Day series and this PSL Blog. 
Community members have also presented work with PSL models at the PyData Global Conference, the Tax Economists Forum, AEI, the Coiled Podcast, and the Virtual Global Village Podcast. New users will also find a better experience learning how to use and contribute to PSL models as many PSL models have improved their documentation through the use of Jupyter Book (e.g., see the Tax-Calculator documentation).\nWe love seeing the community around open source policymaking expand and are proud that PSL models have been used for important policy analysis in 2020, including analyzing economic policy responses to the pandemic and the platforms of presidential candidates. We look forward to more progress in 2021 and welcome you to join the effort as a contributor, financially or as an open source developer.\nBest wishes from PSL for a happy and healthy new year!\nResources:\n\nPSL Twitter Feed\nPSL YouTube\nPSL on Open Collective"
+ },
+ {
+ "objectID": "posts/2021-05-17-demo-day-jupyter-book-deploy.html",
+ "href": "posts/2021-05-17-demo-day-jupyter-book-deploy.html",
+ "title": "Demo Day: Updating Jupyter Book documentation with GitHub Actions",
+ "section": "",
+ "text": "Open source projects must maintain clear and up-to-date documentation in order to attract users and contributors. Because of this, PSL sets minimum standards for documentation among cataloged projects in its model criteria. A recent innovation in executable books, Jupyter Book, has provided an excellent format for model documentation and has been widely adopted by PSL projects (see for example OG-USA, Tax-Brain, Tax-Calculator).\nJupyter Book allows one to write documents with executable code and text together, as in Jupyter notebooks. But Jupyter Book pushes this further by allowing documents with multiple sections, better integration of TeX for symbols and equations, BibTex style references and citations, links between sections, and Sphinx integration (for auto-built documentation of model APIs from source code). Importantly for sharing documentation, Jupyter Books can easily be compiled to HTML, PDF, or other formats. Portions of a Jupyter Book that contain executable code can be downloaded as Jupyter Notebooks or opened in Google Colab or binder\nThe Jupyter Book documentation is excellent and will help you get started creating your “book” (tip: pay close attention to formatting details, including proper whitespace). What I do here is outline how you can easily deploy your documentation to the web and keep it up-to-date with your project.\nI start from the assumption that you have the source files to build your Jupyter Book checked into the main branch of your project (these maybe yml , md , rst , ipynb or other files). For version control purposes and to keep your repo trim, you generally don’t want to check the built documentation files to this branch (tip: consider adding the folder these files will go to (e.g., /_build to your .gitignore ). When these files are in place and you can successfully build your book locally, it’s time for the first step.\nStep 1: Add two GH Actions to your project’s workflow: 1. 
+ An action to check that your documentation files build without an error. I like to run this on each push to a PR. The action won’t hang on warnings, but will fail if your Jupyter Book doesn’t build at all. An example of this action from the OG-USA repo is here:\n\nname: Check that docs build\non: [push, pull_request]\n\njobs:\n build:\n runs-on: ubuntu-latest\n steps:\n - name: Checkout\n uses: actions/checkout@v2 # If you're using actions/checkout@v2 you must set persist-credentials to false in most cases for the deployment to work correctly.\n with:\n persist-credentials: false\n\n - name: Setup Miniconda\n uses: conda-incubator/setup-miniconda@v2\n with:\n activate-environment: ogusa-dev\n environment-file: environment.yml\n python-version: 3.7\n auto-activate-base: false\n\n - name: Build # Build Jupyter Book\n shell: bash -l {0}\n run: |\n pip install jupyter-book\n pip install sphinxcontrib-bibtex==1.0.0\n pip install -e .\n cd docs\n jb build ./book\nTo use this in your repo, you’ll just need to change a few settings such as the name of the environment and perhaps the Python version and path to the book source files. Note that in the above yml file sphinxcontrib-bibtex is pinned. You may be able to unpin this, but OG-USA needed this pin for documentation to compile properly due to changes in the jupyter-book and sphinxcontrib-bibtex packages.\n\nAn action that builds and deploys the Jupyter Book to GH Pages. The OG-USA project uses the deploy action from James Ives. This is something that you will want to run when PRs are merged into your main branch so that the documentation is kept up-to-date with the project. 
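A deploy workflow built around that action might look roughly like the following; the branch names, action version, and the folder path of the built HTML are assumptions, so check your own repo's layout and the action's current documentation before copying:

```yaml
name: Build and deploy Jupyter Book docs
on:
  push:
    branches: [master]   # assumed main branch name

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          persist-credentials: false

      # ... set up the conda environment and run `jb build` here,
      # exactly as in the "Check that docs build" action above ...

      - name: Deploy to gh-pages
        uses: JamesIves/github-pages-deploy-action@v4
        with:
          branch: gh-pages               # branch the compiled book is pushed to
          folder: docs/book/_build/html  # assumed path of the built HTML
```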
+ To modify this action for your repo, you’ll need to change the repo name, the environment name, and potentially the Python version, branch name, and path to the book source files.\n\nStep 2: Once the action in (2) above is run, your compiled Jupyter Book docs will be pushed to a gh-pages branch in your repository (the action will create this branch for you if it doesn’t already exist). At this point, you should be able to see your docs at the URL https://GH_org_name.github.io/Repo_name. But it probably won’t look very good until you complete this next step. To have your Jupyter Book render on the web as you see it on your machine, you will want to push and merge an empty file with the name .nojekyll into your repo’s gh-pages branch.\nThat’s it! With these actions, you’ll be sure that your book continues to compile and a new version will be published to the web with each merge to your main branch, ensuring that your documentation stays up-to-date.\nSome additional tips:\n\nUse Sphinx to document your project’s API. By doing so you’ll automate an important part of your project’s documentation – as long as the docstrings are updated when the source code is, the Jupyter Book you are publishing to the web will be kept in sync with no additional work needed.\nYou can have your gh-pages-hosted documentation point to a custom URL.\nProject maintainers should ensure that docs are updated with PRs that are relevant (e.g., if the PR changes source code affecting a user interface, then documentation showing example usage should be updated) and help contributors make the necessary changes to the documentation source files."
+ },
+ {
+ "objectID": "posts/2021-07-16-demo-day-constructing-tax-data-for-the-50-states.html",
+ "href": "posts/2021-07-16-demo-day-constructing-tax-data-for-the-50-states.html",
+ "title": "Demo Day: Constructing tax data for the 50 states",
+ "section": "",
+ "text": "Federal income tax reform impacts can vary dramatically across states. The cap on state and local tax deductions (SALT) is a well-known example, but other policies also have differential effects because important tax-relevant features vary across states such as the income distribution, relative importance of wage, business, and retirement income, and family size and structure. Analyzing how policy impacts vary across states requires data that faithfully represent the characteristics of the 50 states.\nThis Demo Day described a method and software for constructing state weights for microdata files that (1) come as close as possible to targets for individual states, while (2) ensuring that the state weights for each tax record sum to its national weight. The latter objective ensures that the sum of state impacts for a tax reform equals the national impact.\nThis project developed state weights for a data file with more than 200,000 microdata records. The weighted data file comes within 0.01% of desired values for more than 95% of approximately 10,000 targets.\nThe goal of the slides and video was to enable a motivated Python-skilled user of the PSL TaxData and Tax-Calculator projects to reproduce project results: 50-state weights for TaxData’s primary output, the puf.csv microdata file (based primarily on an IRS Public Use File), using early-stage open-source software developed in the project. 
Thus, the demo is technical and focused on nuts and bolts.\nThe methods and software can also be used to:\n\nCreate geographic-area weights for other microdata files\nApportion state weights to Congressional Districts or counties, if suitable targets can be developed\nCreate state-specific microdata files suitable for modeling state income taxes\n\nThe main topics covered in the slides and video are:\n\nCreating national and state targets from IRS summary data\nPreparing a national microdata file for state weighting\nApproaches to constructing geographic weights\nRunning software that implements the Poisson-modeling approach used in the project\nMeasures of quality of the results"
+ },
+ {
+ "objectID": "posts/2021-01-29-demo-day-scf-microdf.html",
+ "href": "posts/2021-01-29-demo-day-scf-microdf.html",
+ "title": "Demo Day: Running the scf and microdf Python packages in Google Colab",
+ "section": "",
+ "text": "For Monday’s PSL Demo Day, I showed how to use the scf and microdf PSL Python packages from the Google Colab web-based Jupyter notebook interface.\nThe scf package extracts data from the Federal Reserve’s Survey of Consumer Finances, the canonical source of US wealth microdata. scf has a single function: load(years, columns) , which then returns a pandas DataFrame with the specified column(s), each record’s survey weight, and the year (when multiple years are requested).\nThe microdf package analyzes survey microdata, such as that returned by the scf.load function. It offers a consistent paradigm for calculating statistics like means, medians, sums, and inequality statistics like the Gini index. Most functions are structured as follows: f(df, col, w, groupby) where df is a pandas DataFrame of survey microdata, col is a column(s) name to be summarized, w is the weight column, and groupby is the column(s) to group records in before summarizing.\nUsing Google Colab, I showed how to use these packages to quickly calculate mean, median, and total wealth from the SCF data, without having to install any software or leave the browser. I also demonstrated how to use the groupby argument of microdf functions to show how different measures of wealth inequality have changed over time. Finally, I previewed some of what’s to come with scf and microdf : imputations, extrapolations, inflation, visualization, and documentation, to name a few priorities.\nResources:\n\nSlides\nDemo notebook in Google Colab\nSimulation from the demonstration\nscf GitHub repo\nmicrodf GitHub repo\nmicrodf documentation"
+ },
+ {
+ "objectID": "posts/2021-08-09-demo-day-unit-testing.html",
+ "href": "posts/2021-08-09-demo-day-unit-testing.html",
+ "title": "Demo Day: Unit testing for open source projects",
+ "section": "",
+ "text": "Unit testing is the testing of individual units or functions of a software application. This differs from regression testing that focuses on the verification of final outputs. Instead, unit testing tests each smallest testable component of your code. This helps to more easily identify and trace errors in the code.\nWriting unit tests is good practice, though not one that’s always followed. The biggest barrier to writing unit tests is that doing so takes time. You might wonder “why am I testing code that runs?” But there are a number benefits to writing unit tests:\n\nIt ensures that the code does what you expect it to do\nYou’ll better understand what your code is doing\nYou will reduce time tracking down bugs in your code\n\nOften, writing unit tests will save you time in the longer run because it reduces debugging time and because it forces you to think more about what your code does, which often leads to the development of more efficient code. And for open source projects, or projects with many contributors, writing unit tests is important in reducing the likelihood that errors are introduced into your code. This is why the PSL catalog criteria requires projects to provide at least some level of unit testing.\nIn the PSL Demo Day video linked above, I illustrate how to implement unit tests in R using the testthat package. 
There are essentially three steps to this process:\n\nCreate a directory to put your testing script in, e.g., a folder called tests\nCreate one or more scripts that define your tests.\n\nEach test is represented as a call to the test_that function and contains a statement that will evaluate as true or false (e.g., you may use the expect_equal function to verify that a function returns expected values given certain inputs).\nYou will want to use test in the name of these test scripts as well as something descriptive of what is tested.\n\nCreate a script that will run your tests.\n\nHere you’ll need to import the testthat package and you’ll need to call the script(s) you are testing to load their functions.\nThen you’ll use the test_dir function to pass the directory in which the script(s) you created in Step 2 reside.\n\n\nCheck out the video to see examples of how each of these steps is executed. I’ve also found this blog post on unit tests with testthat to be helpful.\nUnit testing in Python seems to be more developed and straightforward with the excellent pytest package. While pytest offers many options for parameterizing tests, running tests in parallel, and more, the basic steps remain the same as those outlined above:\n\nCreate a directory for your test modules (call this folder tests as pytest will look for that name).\nCreate test modules that define each test\n\nTests are defined much like any other function in Python, but will involve an assertion statement that is triggered upon test failure.\nYou will want to use test in the name of these test modules as well as something descriptive of what is tested.\n\nYou won’t need to create a script to run your tests as with testthat, but you may create a pytest.ini file to customize your test options.\n\nThat’s about it to get started writing unit tests for your code. PSL cataloged projects provide many excellent examples of a variety of unit tests, so search them for examples to build from. 
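As a minimal illustration of the pytest pattern sketched above, here is a hypothetical test module; the file name and the double function are invented, and in a real project the function under test would be imported from your package rather than defined in the test file:

```python
# tests/test_utils.py
# pytest discovers files and functions whose names start with "test".

def double(x):
    """Toy function standing in for code imported from your package."""
    return 2 * x

def test_double():
    # Plain assert statements; pytest reports a failure if any is False.
    assert double(3) == 6
    assert double(0) == 0
    assert double(-1) == -2
```

Running pytest from the project root would then collect this module and report the results of each test function.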
In a future Demo Day and blog post, we’ll talk about continuous integration testing to help get even more out of your unit tests.\nResources:\n\ntestthat package for unit testing in R\npytest package for unit testing in Python\nPSL catalog criteria\nUnit tests for the capital-cost-recovery model"
+ }
+]
\ No newline at end of file
diff --git a/search/index.html b/search/index.html
deleted file mode 100755
index 3a2d2c5..0000000
--- a/search/index.html
+++ /dev/null
@@ -1,11 +0,0 @@
-
-
-
-
-