Indian Institute of Technology, Bombay Mumbai

March 2021

IIT Bombay - PoCRA MoU III

Phase IV - Delivery Report

List of Documents:

Sr. No.  Document  Report Page Number (Bottom Center)

IT Reports

1 Technical Documentation of PoCRA Dashboard 3

2 Dashboard Code Description 5

3 MLP Script 9

4 Farmwater Handover Document 13

5 List of Interactions with PMU for IT Handover 20

6 Village Contingency MAP 21

7 Smooth Weather Skymet Data 27

8 Dashboard Delivery Note 32

9 IT Stack 65

Non IT Reports

10 Report on Rabi Contingency Planning 80

11 Note on Community Comprehension 113

12 Case Study - Final Closure Field Report 129

13 Update Note on NBSS 153

14 Note on GSDA IITB Collaboration 156

15 Note on Contingency Support 177

Energy

16 Phase IV Deliverables for Objective I 181

17 Phase IV Deliverables for Objectives H, J and K 221

Technical Documentation of PoCRA Dashboard Handover

Prerequisite Software

OS: Windows 10
Version Control: Git 2.28.0

Backend:
1. Python 3.7
2. JRE 8u192
3. GeoServer
4. PostgreSQL 11
5. OSGeo4W

Frontend:
1. Yarn 1.22.0
2. NVM
3. Node.js

Server: Apache (httpd-2.4.37)

Software Architecture

The dashboard is implemented in two independent parts which are “Frontend” and “Backend”.

Frontend deals with the visualization aspect of the dashboard whereas backend supports the frontend by providing various APIs.

The frontend is developed in ReactJS and the backend in Flask.

Separate Git repositories are maintained for the two parts.

Code Organization Backend

The backend is organized into two sections: boot_and_migrate and pocragis_api. The pocragis_api section provides the APIs for the dashboard and for other legacy apps developed by IITB. The boot_and_migrate section handles installation and support tasks such as creating and populating the database and running the daily estimation model.

Frontend

The frontend is a standard react app that connects to the dashboard backend using network API calls.

Installation

Provided that all the prerequisite applications are already installed, the installation can proceed in the following way.


Backend

1. Clone the backend repository.

2. Open the command prompt as administrator. Create and activate a Python virtual environment.

3. cd to the backend repository.

4. Run the command: python setup.py install

5. cd to boot_and_migrate\migrations

6. Create directories named resources and exported_rasters.

7. Copy the Districts, Talukas, Villages, PoCRA_Clusters, PoCRA_Villages, Slope, Soil, LULC, Structures and Subdivisions zip files into the resources directory. Copy the Consolidated_Rainfall_5years_processed_for_copy_to_db.csv file into the same folder.


Dashboard - Code Level Description

There are two important files for getting the dashboard up and running, viz. setup.py and update.py, in the boot_and_migrate/migrations folder.

1. boot_and_migrate/migrations/setup.py

This file aims to install and populate all the databases and other geoserver functionalities.

The execution of this file starts from __main__ and there are two flags (do/undo) that determine the action to be performed. When do = True, the code should create and populate the databases based on the functions uncommented in the do_order list. When do = False, it sets undo as True and the functions uncommented in undo_order start dropping or reversing the do actions.

Initial step:

Comment all the lines inside the do_order list and uncomment the create_db function call in the if do: block. This will create the database with the name defined in the .env file, along with some necessary extensions and a tablespace.

After successful execution of create_db, the function call in the if do: block should be commented.

Once the DB, tablespace, and extensions are in place, the procedures defined in the do_order list can be executed one at a time: comment all the lines, then uncomment and execute them from top to bottom. Once a step executes successfully, comment its line again, uncomment the next line, and repeat. If any error is encountered in a step, that step should be undone by setting the do flag to False and uncommenting the corresponding step in the undo_order list.
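The comment/uncomment workflow above follows a simple do/undo pattern. A minimal sketch of the idea (illustrative only; the step names here are hypothetical stand-ins, not the actual setup.py functions):

```python
# Minimal sketch of the setup.py do/undo pattern: each migration step has a
# "do" action and a matching "undo" action, and exactly the uncommented
# entries in the relevant list are executed, top to bottom.

executed = []  # records which actions ran, for illustration

def create_schema_example():      # hypothetical "do" step
    executed.append("do:create_schema_example")

def drop_schema_example():        # hypothetical matching "undo" step
    executed.append("undo:drop_schema_example")

do_order = [
    create_schema_example,
    # next_step,                  # kept commented until the step above succeeds
]

undo_order = [
    drop_schema_example,
]

do = True  # set to False to undo a failed step instead

if do:
    for step in do_order:
        step()
else:
    for step in undo_order:
        step()

print(executed)
```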

Brief description of all the do_order functions:

1. create_schema_adminregions:

Creates a schema named adminregions in the database. All the administrative shapefile data should go in the adminregions schema as a table.

2. load_admin_shapefiles:

Loads the shapefiles into Postgres/PostGIS tables. The shapefiles should be present in the boot_and_migrate/migrations/resources folder. The shapefile names and table names are configured in the ADMINREGIONS_SHAPEFILE_LOAD_INFO variable.

3. corrections_to_loaded_admin_boundary_tables:

Some inconsistent naming in the loaded admin boundary tables is corrected.

4. generate_cluster_table

Create & populate the table adminregions.pocra_districts_all_clusters using the adminregions.pocra_districts_all_villages table.

5. generate_circle_table


Create & populate the table adminregions.pocra_districts_all_circles using the adminregions.pocra_districts_all_villages table.

6. generate_pocra_region_table

Create & populate the table adminregions.entire_pocra_districts_region using the adminregions.pocra_districts table.

7. create_key_value_store

The key-value datastore is used for logging important dates, zooming-hierarchy, etc.

8. create_district_taluka_pocra_village_hierarchy_for_map_zooming

Used for district -> talukas -> villages mapping for zoom function in dashboard.

9. create_workspace

Creates geoserver workspace

10. create_postgis_datastore_adminregions

Creates datastore for adminregions in geoserver.

11. publish_admin_layers

Publishes the admin layers in geoserver.

12. add_styles

Creates styles for various layers.

13. create_schema_weather

Creates schema named weather in the database.

14. create_tables_and_triggers_for_skymet_data:

Creates various triggers for the skymet data table. There are triggers which sum up / aggregate the hourly skymet values into daily values.

15. create_postgis_datastore_weather

Creates a geoserver datastore for weather.

16. create_schema_field:

Creates a schema named field in the database. Static data should go here.

17. load_zipped_soil_shapefile:

Loads the zipped soil data present in the resources folder into the database.

18. load_zipped_lulc_shapefile:

Loads the zipped lulc data present in the resources folder into the database.

19. load_zipped_slope_geotiff:

Loads the zipped slope data present in the resources folder into the database.

20. create_schema_modeling

Creates the schema named modeling in the database. The rasters and other modeling-related data should go in this schema.

21. create_and_populate_tables_for_maharain_data:

These are legacy data for 2013-2017 maharain, mainly used in the Farmwater Android App.

22. create_and_populate_tables_for_maharain_voronoi:

Creates voronois based on the maharain data.

23. create_schema_independent_raster_utility_functions:

Creates utility functions for raster-related operations.

24. load_phase_1_2_3_zonefiles:

The pocra region is divided into 3 zones, namely phase1, phase2, and phase3. This method loads those shapefiles into the adminregions database. Mainly used in the MLP script.

25. correct_phase_file_entries:

Corrects the names in the phase files and inserts some missing data as well.

26. create_and_populate_PGDP_compliant_static_data_tiles_table:

Rasterizes and stores all the geometry tables.

27. create_daily_skymet_station_raster_table:

Creates the raster table for daily skymet stations. The skymet voronoi geometries change over time due to non-functional stations; these tables record the changed skymet stations.

28. create_daily_estimation_results_tables:

Creates the table daily_estimated_parameter_raster for storing the daily estimation results and daily_regionwise_spatial_statistics_array for village-level estimates.

29. setup_use_of_MLP_DB_through_postgres_fdw:

Foreign tables setup for MLP.

30. create_postgis_datastore_mlp:

Datastore for MLP layers.

31. load_zipped_structures_shapefile:

Loads the MLP structures shapefile.

32. standardize_mlp_structure_names_in_table:

Data cleaning.

33. publish_mlp_structures:

Publishing in geoserver.

34. setup_use_of_FFS_DB_through_postgres_fdw:

Foreign tables setup for FFS.

35. create_postgis_datastore_ffs:

Datastore for FFS layers.

36. create_tables_and_triggers_for_smoothed_skymet_data:

Creates tables and triggers for smooth weather. Currently used in the MLP script but can be extended to the Dashboard if necessary.

37. create_tables_smoothed_supporting_skymet_data:

Utility function for smooth weather.

38. create_table_for_non_functional_stations:

Utility function for smooth weather.

39. create_schema_uploaded_maps:

The schema for Map upload in the dashboard. Functionality restricted to PMU login.

40. create_postgis_datastore_uploaded_maps:

Datastore for uploaded maps in the dashboard.

41. create_uploaded_maps_table:

Creates a necessary table for uploading maps through the UI.

Once the setup is done, the next task is to update the dashboard up to the current date for a rain year. This is achieved using update.py in the boot_and_migrate/migrations folder.

There are six main methods in the update.py file that should run on a daily basis using the Task Scheduler in Windows.

These are as follows:

1. update_data_in_weather_schema:

Updates the weather.skymet_data table with that day's entries fetched from the skymet API. The smooth tables are also updated through this routine. Also updates the weather indicators (daily/weekly/monthly/village-level).

2. update_skymet_station_data_id_raster_table:

Inserts an entry for the skymet raster in case the AWS Voronoi geometry is changed for a day. If the current day’s geometry is the same as yesterday then only valid_till date is updated.

3. update_daily_estimation_results:

Runs the model up to TILL_DATE and stores the results as a row in the raster table.

Also aggregates the results on village level for village-level estimates.

4. update_mlp_indicators_table_and_republish:

Updates and republishes the MLP table through foreign tables.

5. update_ffs_entity_tables:

Updates the ffs tables.

6. update_ffs_indicators_table_and_republish:

Publishes the ffs indicators tables.
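A minimal sketch of how these routines might be chained for the daily scheduled run (stub functions stand in for the real ones; the actual update.py may be organized differently):

```python
# Sketch of a daily update runner: each stub stands in for the real routine
# of the same name described above, and a scheduler (e.g. Windows Task
# Scheduler) would invoke run_daily_update() once per day.

call_log = []

def update_data_in_weather_schema():
    call_log.append("weather")

def update_skymet_station_data_id_raster_table():
    call_log.append("station_raster")

def update_daily_estimation_results():
    call_log.append("estimation")

def update_mlp_indicators_table_and_republish():
    call_log.append("mlp")

def update_ffs_entity_tables():
    call_log.append("ffs_entities")

def update_ffs_indicators_table_and_republish():
    call_log.append("ffs_indicators")

DAILY_STEPS = [
    update_data_in_weather_schema,
    update_skymet_station_data_id_raster_table,
    update_daily_estimation_results,
    update_mlp_indicators_table_and_republish,
    update_ffs_entity_tables,
    update_ffs_indicators_table_and_republish,
]

def run_daily_update():
    # Run each step in order; weather data must land before the estimation
    # model runs, so the ordering matters.
    for step in DAILY_STEPS:
        step()

run_daily_update()
print(call_log)
```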

Pocragis_api:

In the pocragis_api folder, Flask APIs are written which read and serve the data to various applications and the dashboard.


2020 Hourly Model MLP Script

Prepared By: Ashish Wankhade
March 2021

LOCATION: VM1 (e:\pocragis\MLP_script)
LANGUAGE: Python

OBJECTIVE:

To compute the zone-wise water budget for phase I, II, and III villages.

IMPORTANT TABLES:

The script is derived from the dashboard architecture; the following are the important tables used in the script.

1. Phase_1_clusters_zoning_updated
2. Phase_2_clusters_zoning
3. Phase_3_clusters_zoning
4. smoothed_skymet_data (foreign table in the smooth weather DB)
5. Skymet_data
6. pgdp_compliant_static_data_tiles

The script is also dependent on the cloud MLP database. The following tables are used from the MLP database:

1. plugin_model_output
2. plugin_crop_and_landuse
3. plugin_zone

STEPS TO RUN:

1. The script cannot be executed on local desktops because of IP whitelisting; for the moment, it can only be executed on VM1.

2. Run the command prompt as administrator.

3. cd to the script folder (e:\pocragis\MLP_script).

4. Edit phase_X_clusters_zonewise_budgeting.py (X: 1, 2, 3) and add districts as needed in REQUIRED_DISTRICTS.

5. Run python phase_1_clusters_zonewise_budgeting.py for phase I villages, and so on.

6. Output can be written to a local CSV or to the MLP database on the cloud, depending on requirements; edit the script accordingly. The script is currently set to CSV mode, and the update to the MLP cloud DB is commented out; uncomment it to insert the zone-wise budget results into the MLP database. The (commented) query updates the columns on conflict, i.e. if data is already present for those rows, although a check before model execution prevents unnecessary runs when data is already present in the MLP DB.


OTHER IMPORTANT ASPECTS IN THE SCRIPT:

RESOLUTION:

To specify the resolution, edit pocra_estimate_improved_copy_for_phase_3.py and change the RESOLUTION_FOR_ZONEWISE_BUDGET variable (currently set to 200, around lines 45-50). A lower resolution value yields a higher number of computing points, which improves accuracy at the cost of execution time (a trade-off between accuracy and time). The resolution should therefore be selected based on the available computing power and the expected execution time.
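As a rough illustration of this trade-off (assuming the resolution value is a grid spacing in metres, which is our reading and is not stated explicitly in the script):

```python
def approx_computing_points(area_hectares, resolution_m):
    """Approximate number of grid points for a cluster at a given grid spacing."""
    area_m2 = area_hectares * 10_000          # 1 hectare = 10,000 m^2
    return area_m2 / (resolution_m ** 2)

# A 6,000-hectare cluster at the default resolution of 200:
print(approx_computing_points(6000, 200))    # 1500.0 points
# Halving the spacing quadruples the work:
print(approx_computing_points(6000, 100))    # 6000.0 points
```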

CROPS:

The script runs on all 28 kharif crops and 15 rabi crops. The crops are fetched from the MLP database table plugin_crop_and_landuse. If a new crop is to be added, it must be present in that table, and its crop properties ('kc', 'depletion_factor', 'root_depth') must be added in the lookup.py file.

ESTIMATED TIME:

Depends on the size of districts/clusters.

For a cluster of 6,000 hectares at a resolution of 200, the hourly-model script may complete execution within roughly 40-60 minutes.

TILL DATE:

To specify the TILL_DATE for model estimation, set the date at:

weather_data = WeatherDataHandler().get_daily_weather_data_for_pocra_model(...till_date = xxx)

in the method def generate_zonewise_budget(self).

If the model is executed only till the end of monsoon instead of the entire year, then the post-monsoon PET must be handled separately for the final results. For this, we use default PET values for each crop from WALMI: the calculated monsoon PET is subtracted from the default value, and the remainder is taken as the post-monsoon PET.
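The post-monsoon PET handling described above amounts to a simple subtraction (the numbers below are made-up illustrations; the real defaults come from the WALMI lookup data):

```python
def post_monsoon_pet(default_full_season_pet_mm, computed_monsoon_pet_mm):
    # The WALMI default covers the full crop season; subtracting the PET
    # computed up to monsoon end leaves the post-monsoon share.
    return default_full_season_pet_mm - computed_monsoon_pet_mm

print(post_monsoon_pet(450.0, 320.0))  # illustrative values in mm
```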

Make sure that the smoothed_skymet_data is updated till the TILL_DATE. To update it, double-click the smooth_weather.py file in the smooth_weather_script folder.

A cronjob has been added already for updating smooth weather data.

SMOOTH WEATHER INTEGRATION:

The MLP script runs with smooth weather data, in which NULL values in the raw skymet data are replaced by the values of the nearest AWS station. This integration is essential for the hourly model, which depends on more weather parameters than the daily model (in the daily model we used to skip points with NULL weather values). With smooth weather, no points need to be skipped.
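The replacement idea can be sketched as follows (hypothetical in-memory data; the real implementation operates on the Postgres tables described here, and the nearest-station linkage is assumed to be known already):

```python
def smooth_series(raw, donor):
    """Replace NULL (None) readings in a station's series with the
    corresponding reading from its nearest functioning AWS station."""
    return [d if r is None else r for r, d in zip(raw, donor)]

raw_rain   = [4.0, None, 0.0, None]   # raw skymet values with gaps
donor_rain = [3.5, 2.0, 0.0, 1.0]     # nearest AWS station's values
print(smooth_series(raw_rain, donor_rain))  # [4.0, 2.0, 0.0, 1.0]
```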

For any further changes to the weather table please refer to:


File: database_handling.py
Function: def get_skymet_daily_data_for_pocra_model(self, for_year)

The current query for fetching the weather data is as follows:

SELECT d.id, s.rain, s.daily_temp_min, s.daily_temp_avg, s.daily_temp_max,
       s.temp_min, s.temp_avg, s.temp_max, s.rh_avg, s.wind_avg, d.lat, d.lon
FROM weather.skymet_data d
INNER JOIN weather.smooth_skymet_data s
    ON d.rain_year = %(year)s AND d.rain_year = s.rain_year AND
       ((d.lat = s.lat AND d.lon = s.lon) OR
        (d.district = s.district AND d.taluka = s.taluka AND d.rain_circle = s.rain_circle))
WHERE d.rain_year = %(year)s;

Since smooth weather is available from 2020 onwards, and model results prior to that already exist, it is advisable to run the script only for years >= 2020. The script will not work for 2013-17 data because the weather for those years is not available at hourly granularity. For 2018 and 2019, the data needs to be smoothed and must be present in the smooth_skymet_data table.

To run the script on geometries other than phase I, II, and III villages:

1. There must be a table in the pocra_gis DB containing Geom, Unicode, zone_name, district, and mini_water attributes. Make sure none of these columns are NULL.

2. Add the table as an entry in the pgdp_compliant_static_data_tiles table using this command in the Query Tool (pgAdmin):

SELECT populate_pgdp_compliant_static_data_tiles_with_geom_gid_tiles('geom_table_schema', 'geom_table_name');

where geom_table_schema is the schema name and geom_table_name is the table name of the new table with geometries.

3. Once the database tables are added successfully, edits may be required in the code accordingly so that the new table name is supplied to the script.

These edits will be required in phase_X_clusters_zonewise_budgeting.py (X: 1,2,3, etc.) and database_handling.py file.

Changes in the phase_X_clusters_zonewise_budgeting.py file will be as follows:

Replace the phase_X_clusters_zoning table with the new table in all the queries and arguments, and comment out the parts where the cloud MLP database is queried.

Changes in the database_handling.py file for adding a table named phase_1_clusters_zoning_updated will be as follows.

Add this key-value pair in the OBJECT_ACCESS_TABLES variable:

'phase_1_clusters_zoning_updated': {
    'schema': 'public',
    'table': 'phase_1_clusters_zoning_updated',
    'id_column_for_query': 'gid',
    'columns': ['gid', 'unicode', 'mini_water', 'district']
},

DAILY MODEL MLP SCRIPT (2013-19)

Prior to the hourly script, the daily model script for MLP estimation was handed over to the PMU. This script is also present on VM1.

The code architecture is identical except for the water balance model part.

This script can be executed for 2013-20XX data.

Farm Water Balance Android App

Prepared by Ashish Wankhade

Objective:

1. To facilitate both hourly and daily water balance computation using a mobile application.

2. Computation of crop water productivity.

Technology: Android

Requirements: Android mobile phone, Farm Water App, location services, and internet connectivity

Outputs: Water balance model results at point level in graphical, PDF, and tabular formats.

API Endpoints:

http://gis.mahapocra.gov.in/dashboard_testing_api_2020_09_08/farmapp/get_circle_data?lat=x&lon=y

The above API is used for fetching the data at hourly granularity for all years (>= 2018).
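A minimal Python sketch of calling this endpoint (the URL is taken from above; the response format is not documented here, so this only composes the request URL):

```python
from urllib.parse import urlencode

# Endpoint from the handover note above.
BASE = ("http://gis.mahapocra.gov.in/dashboard_testing_api_2020_09_08"
        "/farmapp/get_circle_data")

def circle_data_url(lat, lon):
    # Compose the GET request URL for a point of interest.
    return BASE + "?" + urlencode({"lat": lat, "lon": lon})

url = circle_data_url(19.0760, 72.8777)
print(url)
```

An actual request could then be made with urllib.request.urlopen(url), which requires connectivity to the PoCRA GIS server.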

Steps to Run:

1. Install the Farm Water App.

2. Open the app. Make sure you have an active internet / wifi connection and the location service is On.

3. Search for the village name on the Map screen. Place the marker at the point of interest and click the next button.


4. Enter form details like farmer name, monsoon year, irrigation, yield, etc. Select the model (daily/hourly) using the radio button. Click "चालवा" (Run) once all the fields are entered.

5. The model will run, and 3 tabs will be shown with graphs and tables depicting the results.

6. On clicking "जतन करा" (Save), a PDF file will be generated on your file system.


Water Balance Summary and Graphs

Sample report: The following is a sample report from the Farm Water Balance App. Graphs, equations, and the value ranges of the different parameters in the graphs are explained here for developers' use.


Monsoon End Summary:

The monsoon end values for water balance parameters depict a summation of daily values. The point level water balance model provides bifurcation of rainwater into its components - runoff, GW recharge, soil moisture, and AET (Actual Evapotranspiration - water available to crop). This bifurcation depends on geographical properties like slope, drainage, soil type, soil depth, and crop sown - PET crop demand, etc.

The water balance fits the following equation, whether measured daily or cumulatively:

Rainfall = runoff + soil moisture + AET + GW recharge

So the individual values of these parameters also remain less than rainfall.
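The water balance equation above can be checked numerically; a small sketch with made-up values:

```python
def water_balance_closes(rainfall, runoff, soil_moisture, aet, gw_recharge, tol=1e-6):
    # Rainfall should equal the sum of its components (all values in mm).
    return abs(rainfall - (runoff + soil_moisture + aet + gw_recharge)) < tol

# Illustrative monsoon-end values in mm (not real data):
print(water_balance_closes(700.0, 150.0, 120.0, 280.0, 150.0))  # True
# When the balance closes and all components are non-negative, each
# individual component is necessarily less than the rainfall total.
```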

Example

Agricultural year: June 2019 to May 2020

Monsoon end values: summation of daily values (cumulative) from 1st June 2019 to the 'monsoon end date' selected by the user in the farm-level app.

Sr. no | Parameter (unit: mm) | Monsoon end duration | Crop end duration
1 | Rain | 1st June to monsoon end date | 1st June to crop end date
2 | Runoff | 1st June to monsoon end date | 1st June to crop end date
3 | GW (Ground Water) recharge | 1st June to monsoon end date | 1st June to crop end date
4 | Soil Moisture | 1st June to monsoon end date | 1st June to crop end date
5 | AET (Actual Evapotranspiration) | Sowing date to monsoon end date | Sowing date to crop end date
6 | PET (Potential Evapotranspiration or Crop Water Demand) | Sowing date to monsoon end date (PET begins from the sowing date, and AET depends on PET) | Sowing date to crop end date

Graphs:

The graphs are plotted for the duration 1st June to the current date for the selected agricultural year. All parameters are plotted at daily level.

Graph 1: Rainfall / Irrigation - AET-PET graph

1. All parameters in this graph are plotted at daily level, and their unit is mm.

2. The following is the range of values for the parameters:
   Daily PET = 3 to 6 mm
   Daily AET = 3 to 6 mm

3. Daily rainfall values cannot be predicted; the graph automatically adjusts to the highest value.

Graph 2: Rainfall/irrigation - deficit - runoff - GW recharge graph

1. Here the parameters are taken in the following manner:
   a. Rainfall - daily
   b. Runoff - cumulative
   c. GW recharge - cumulative
   d. Deficit (= PET - AET) - cumulative

2. The cumulative value is a summation of all values till date; it shows the total stock till date, as in the summary report.
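The cumulative series used in this graph is just a running sum of the daily values; for example (illustrative numbers):

```python
from itertools import accumulate

daily_runoff_mm = [0.0, 5.0, 12.0, 0.0, 3.0]       # illustrative daily values
cumulative_runoff_mm = list(accumulate(daily_runoff_mm))
print(cumulative_runoff_mm)  # [0.0, 5.0, 17.0, 17.0, 20.0]
```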


3. Just as the range of rainfall values is not predictable, runoff, GW recharge, and total deficit are not predictable either. But they will be less than the total rainfall, following the given water balance equation:

Rainfall = runoff + soil moisture + AET + GW recharge

Interpretation of Graphs:

These graphs are helpful in understanding the deficit faced by the crop, both in terms of quantity and its temporal spread. Based on the timing of the deficit, app users can schedule irrigation to meet the crop PET. Similarly, GW and runoff help users see whether the water required to meet the crop PET is available from either of these components, or collectively from both.

Therefore these graphs help answer:

➔ Did the crop meet its PET, or did it face a water deficit?

➔ When should the crop be irrigated if there is a deficit?

➔ Is sufficient water available to provide irrigation to meet the crop PET?


IT Handover Note

The IT part of IITB-PoCRA MoU III has been handed over to the PMU IT team. Following is the list of tools handed over along with meetings/ interactions conducted for the same.

IT Tools/ source code/ repositories:

1. Dashboard
2. Farm Level App
3. Point water balance model QGIS plugin
4. Soil Survey App
5. MLP backend water balance generation script
6. Python script for the point model

Repositories and documents have been shared here:

https://drive.google.com/drive/u/0/folders/1l4RLpFNG6n-hAkezts-ibK3a5y6czaeS

List of Interactions with PMU for IT Handover

1. Meeting at PMU Office: 14th-15th December 2020

Deliveries: Water budget concept, inputs preparation for MLP, MLP water budget, walkthrough of the Farm Water Balance App code and Soil Survey App code, trying out the dashboard setup with Mr. Deepak

2. Zoom Session: 22nd December 2020

Deliveries: Farm Water Balance App and Soil Survey App queries resolution and handover

3. Meeting at PMU Office: 23rd-24th December 2020

Deliveries: Dashboard and plugin main functions and code walkthrough, initiation of dashboard and plugin setup/installation and walkthrough, FFS implementation and issues discussion

4. Zoom Session: 4th Jan 2021

Deliveries: QGIS plugin handover and queries resolution

5. Multiple Zoom/AnyDesk sessions for error resolution

Deliveries: Dashboard setup on the developer's local machine

Queries raised have been explained/addressed through meetings.


Village Contingency MAP

Prepared By: Sharad Kumar
Reviewed By: Shubhada Sali

March 2021

What is village contingency?

Village contingency covers all those weather circumstances which can occur at village level and lead to crop losses or damage. In our village-level contingencies, we have tried to capture contingencies that relate to a village and depend only on its weather (rain, temperature, wind, etc.).

The weather data is naturally available at circle level. To enable mapping of contingencies at village level, a database is created by linking each village to its nearest weather station. This master village-AWS linkage database is updated every year on May 31st, so that any new or dropped weather stations are accounted for in that year.

Table 1: API, Database and Scripts on VM1

Getting weather for any particular village:

As of now, weather is recorded circle-wise, with one circle covering more than one village in that region.

To get weather data for any point in the PoCRA region, voronoi polygons can be made by taking the lat/lon of each skymet station as the seed/generator of a voronoi polygon. This method is most effective for getting data at any single point in the region.

Sr. No. | On VM1   | Link
1       | API      | 1. NA  2. NA
2       | Database | 1. weather.weekly_village_data  2. weather.weekly_village_weather_indicators (both get created at run time in the weather schema)
3       | Scripts  | 1. NA  2. NA


If we want weather data for a shape (such as a village), the voronoi polygon method has a drawback: part of the region may lie in one voronoi polygon and the remaining portion in another, and a single village may even lie in more than two polygons. As we can see in the figure below, a village can lie in two different polygons.

Figure 1: Illustrative village that lies in two different polygons

To tackle this problem, we map each village to a single skymet station so that the entire region of the village gets one unique rainfall value. Each village is mapped to its nearest skymet station; this makes the skymet-to-village mapping one-to-many, with one skymet station covering the region of many villages.

To map a village to its nearest skymet station, we need a reference point in the village, so we take the centroid of the village as the reference point for the whole village when calculating the distance to each skymet station.
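The centroid-to-nearest-station mapping can be sketched in Python (hypothetical coordinates and planar distances for illustration; the real linkage is built in Postgres/PostGIS from the tables described here):

```python
import math

def nearest_aws(village_centroid, stations):
    """Map a village centroid (lat, lon) to the id of the nearest AWS."""
    def dist(p, q):
        # Planar distance is fine for illustration; PostGIS would use
        # proper geometry operators.
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(stations, key=lambda s: dist(village_centroid, (s["lat"], s["lon"])))["id"]

stations = [
    {"id": 101, "lat": 19.00, "lon": 75.00},
    {"id": 102, "lat": 19.50, "lon": 75.50},
]
# One AWS can end up linked to many village centroids (one-to-many mapping):
print(nearest_aws((19.05, 75.02), stations))  # 101
print(nearest_aws((19.48, 75.51), stations))  # 102
```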

Making the required database to draw the contingency map

Before making any map for dashboard display, we need two things:

1) A geometry (point/shape) to display over the dashboard

2) A parameter linked to that geometry, based on which we style the map

The first requirement is already fulfilled: we have the geometry of each village in the pocra_districts_all_villages table of the adminregions schema in our database. From this table we get the shape that is displayed over the dashboard.


To fulfill the second requirement, we use our village-to-skymet-station mapping. We first make a table, built skymet-circle-wise, containing weather values and contingencies (e.g. whether rain was < 30 mm in the last week, total rainfall in the last week, etc.).

Using this table of skymet circles with contingencies, we do an inner join with the village-to-skymet-circle table on the skymet circle code, which gives us a table with village-level contingency and weather data.

The attributes of the table we make for each village with contingency (weekly_village_aws_weather) are listed below. Each village has one row.

vincode character varying COLLATE pg_catalog."default",
vil_name character varying COLLATE pg_catalog."default",
district character varying COLLATE pg_catalog."default",
taluka character varying COLLATE pg_catalog."default",
region character varying COLLATE pg_catalog."default",
village_lon numeric,
village_lat numeric,
aws_id integer,
aws_lat numeric,
aws_lon numeric,
aws_circle_name character varying,
id integer,
lat numeric,
lon numeric,
total_rain_in_week real,
avg_daily_rain_in_week real,
min_temp_in_week real,
max_temp_in_week real,
avg_rh_in_week real,
avg_wind_in_week real,
rain_150 boolean

Mapping each entry of the village weather/contingency table to village geometry

Our next step in making the village-level contingency map is linking each row of the table to a geometry, so that its data can be displayed over the dashboard.

In our case, we make an inner join between the pocra_districts_all_villages table and the weekly_village_aws_weather table on vincode (the village code), and add a new geom column to the table; this gives each row an associated geometry attribute.


MAP DRAWING:

This is the final step. In this section of the document there are some actual code snippets which help develop an understanding of map making using geoserver and python.

1) Collecting the required data: To draw maps we need data with geometry that will be displayed to the user. To collect the required data, we run queries on the weather data and collate the results according to our requirement. In update.py on the dashboard, the function update_skymet_weekly_village_indicators does all of this for village contingency.

2) Mapping the weather data to each village using the village-AWS link table:

We map the weather table built for the contingency to each village using the village-AWS link; this way, each village gets its own row of weather parameters, as in this postgres query:

SELECT t2.*, t1.*
INTO weather.weekly_village_aws_weather
FROM weather.weekly_village_weather_indicators AS t1
INNER JOIN weather.aws_village_linkage AS t2 ON t1.id = t2.aws_id;

Here we make a new table from two existing tables: weather.weekly_village_weather_indicators (which holds the weather values for the contingency) and weather.aws_village_linkage (which holds the village-AWS linkage), joined on the AWS id, which is the weather circle ID.

3) Mapping the village weather table to village geometry:

Once we have village-level weather values from the previous steps, we need to map those values to a geometry. The village is a polygon geometry already available in the adminregions schema, so our task is just to map those geometries to the corresponding entries of the village table for correct map display.

The query below links the village geometry to each entry of the village weather table, based on the village code:

SELECT t1.*, t2.geom
INTO weather.weekly_village_data
FROM weather.weekly_village_aws_weather AS t1
INNER JOIN adminregions.pocra_districts_all_villages AS t2 ON t1.vincode = t2.vincode;


After steps 1-3, we have a village-wise weather table with a geometry attribute; the next step deals with putting this table on geoserver as a map and adding a style to it.

One thing to keep in mind is that in this case we used a pre-made geometry. If we do not have a geometry, we need to make one for map display: in many cases we will have only the lat/lon of a point, and that point might represent a large area, or the point itself may need to be displayed. In both cases we need to make a geometry, either a polygon or a point geometry, using the postgis query tool.

4) Putting the table data as a map on geoserver:

In this step we take the table with geometry and put it on geoserver as a map. The code below shows how the table data is posted on geoserver as a map layer.

def post_layer_if_not_exists(layer):
    if layer['name'] not in existing_layers:
        res = requests.post(
            url='/'.join([
                config.GEOSERVER_BASE_URL,
                'workspaces', GEOSERVER_WORKSPACE_NAME,
                'datastores', 'pocragis_weather',
                'featuretypes'
            ]),
            json=layer['info_json'],
            auth=(GEOSERVER_USER, GEOSERVER_PASSWORD)
        )

If we go through this snippet of code we can see that it is taking the json data (line 7) of that layer and putting that json data over geoserver in ​datastore pocragis_wether​which we have made in the setup phase of the dashboard.

A natural question is where this JSON value comes from, since all we had was a table in PostGIS with some attributes and a geometry attribute.

The answer is that this JSON object is created in a separate step, executed before this one, as shown below:

'info_json': get_POLYGON_FEATURETYPE_INFO_json('pocragis_weather', 'weekly_village_data', 'weekly_village_weather_indicators')

The function get_POLYGON_FEATURETYPE_INFO_json converts a polygon geometry layer into a JSON object; for more depth, refer to the code of update.py on the dashboard.
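Although the exact JSON is assembled in update.py, a minimal featuretype body following the GeoServer REST API might look like the sketch below. The field values are illustrative assumptions, not the dashboard's actual output.

```python
def featuretype_info_json(native_table: str, layer_name: str,
                          srs: str = "EPSG:32643") -> dict:
    """Minimal GeoServer REST body for publishing a PostGIS table as a layer.
    A sketch only -- the real helper is get_POLYGON_FEATURETYPE_INFO_json
    in update.py; field values here are assumptions."""
    return {
        "featureType": {
            "name": layer_name,          # name the layer is published under
            "nativeName": native_table,  # PostGIS table inside the datastore
            "title": layer_name,
            "srs": srs,                  # declared spatial reference system
            "enabled": True,
        }
    }
```

Posting such a body to the featuretypes endpoint of the pocragis_weather datastore, as in the snippet above, makes the table available as a map layer.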


Similarly, for point geometry there is another function, get_POINT_FEATURETYPE_INFO_json, which converts a point geometry layer into a JSON object so that it can be published on GeoServer.

Following all the above steps, our contingency map, or any other map involving polygon/point geometry, is published via GeoServer and can be served to the dashboard.


Smooth Weather Skymet Data

Prepared By: Sharad Kumar Reviewed By: Shubhada Sali

March 2021

Why smooth weather?

Occasionally the Skymet API reports null or missing values for some stations. As a result, the actual weather value for the region that station covers is not known to the developer or user. We therefore devised a correction for these missing values and built a new weather database on the dashboard: the smooth weather Skymet data.

The images below show a basic use case of smooth weather data. Some of the intermittent red spots in image (a) are reduced in image (b): the Voronoi map in image (b) is drawn from the smooth weather data, whereas the Voronoi map in image (a) is drawn from the raw weather reported by Skymet, which contains missing values.

Figure (a): Without_correction_skymet_june_rainfall


Figure (b): After_correcting_skymet_june_rainfall

Use cases:

First we discuss the use cases of this database; later sections explain how it is built, along with the other databases created alongside it.

1) A database which does not have any missing values of weather:

The smooth weather database corrects every missing weather value by borrowing the value from the nearest neighbouring station that has data for that period. In this way no missing values remain, so other features/models that depend on weather values can be computed without problems.

2) To track which stations were non-functional over a period of time:

This database also keeps a record of all corrected values and of which rain circle supplied each value; a further database records all rain circles that had any missing value within a 24-hour period (that is, a day). Using these, we can report all circles that failed to record values and investigate the reason.

How smooth Weather database table is created:

The smooth weather database is very similar to the Skymet weather database visible on the dashboard. The only difference in attributes between the two is that the smooth weather table contains one extra attribute, recorded_borrowed, which stores, for each station and hour, whether the value was recorded locally or borrowed from one of its nearest stations.


Schema of smooth weather table:

CREATE TABLE weather.smoothed_skymet_data (
    id integer NOT NULL,
    lat numeric,
    lon numeric,
    district character varying,
    taluka character varying,
    rain_circle character varying,
    rain_year integer,
    data_fetched_upto_rain_doy integer,
    rain real[],
    temp_min real[], temp_avg real[], temp_max real[],
    rh_min real[], rh_avg real[], rh_max real[],
    wind_min real[], wind_avg real[], wind_max real[],
    recorded_borrowed integer[],
    daily_rain real[],
    daily_temp_min real[], daily_temp_avg real[], daily_temp_max real[],
    daily_rh_min real[], daily_rh_avg real[], daily_rh_max real[],
    daily_wind_min real[], daily_wind_avg real[], daily_wind_max real[],
    CONSTRAINT smoothed_skymet_data_pkey PRIMARY KEY (id),
    CONSTRAINT smoothed_skymet_data_lat_lon_rain_year_key UNIQUE (lat, lon, rain_year)
);


As can be seen, the smooth weather table has columns for both hourly and daily data.

The Skymet API gives weather values per hour, so corrections are applied to the hourly data, and triggers then combine the hourly values into daily data. To understand those triggers, the reader should go through the schema code, as they run to many lines. The code for creating this table is in setup.py, part of backend_clone.

The relative path to setup.py, with respect to the backend_clone folder, is /backend_clone/boot_and_migrate/migrations/setup.py.
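The hourly-to-daily aggregation performed by those triggers can be illustrated in pure Python. This is a sketch of the idea only; the production logic lives in SQL triggers in setup.py.

```python
def daily_from_hourly(hourly: list) -> dict:
    """Collapse one day's 24 hourly readings (None = missing) into daily
    min/avg/max/sum, mirroring what the database triggers compute.
    Illustrative only -- the real logic is SQL triggers in setup.py."""
    values = [v for v in hourly if v is not None]
    if not values:
        # a fully missing day stays missing
        return {"min": None, "avg": None, "max": None, "sum": None}
    return {
        "min": min(values),
        "avg": sum(values) / len(values),
        "max": max(values),
        "sum": sum(values),  # used for rainfall, which accumulates rather than averages
    }
```

For rainfall the daily value is the sum of the hourly values, while temperature, humidity, and wind use min/avg/max.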

Two database tables are needed in the process of building the smooth weather table:

1) Auxiliary Weather Table: This table stores the hourly weather values that the Skymet API provides daily. It differs from the Skymet data table in only one way: it makes an entry for every station, whether or not the station reported any information.

Its schema is the same as the smooth weather table's, minus the daily columns, as it is only an auxiliary table used in the process of correcting missing values.

2) Nearest AWS Link Table: This table (weather.aws_nearest_linkage) keeps, for every AWS station in the PoCRA districts, a link to its 10 nearest AWS stations; this link is later used to fill missing values in the smooth weather table.

Table 1: Example of the 10 nearest AWS stations for a single station.
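The construction of such a linkage table can be sketched in pure Python using great-circle distances. Station ids and coordinates below are hypothetical; the real table is built in the database.

```python
import math

def nearest_station_links(stations: dict, k: int = 10) -> dict:
    """For each AWS station, rank the k nearest other stations by great-circle
    (haversine) distance. Mirrors, in pure Python, what a table like
    weather.aws_nearest_linkage stores; inputs here are hypothetical."""
    def haversine_km(a, b):
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * math.asin(math.sqrt(h))

    links = {}
    for sid, pos in stations.items():
        # distance to every other station, sorted nearest-first
        others = [(haversine_km(pos, q), other)
                  for other, q in stations.items() if other != sid]
        links[sid] = [other for _, other in sorted(others)[:k]]
    return links
```

In PostGIS the same ranking would typically be done once with a spatial nearest-neighbour query and stored, exactly as the link table does.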

To create/update the smooth weather table, we first populate the auxiliary table with all weather data for a particular day, whether or not values are missing; if a value is missing, null is simply stored in its place. The smooth weather table is then updated from that day's Skymet data, and any missing value is borrowed from the nearest station that has a value for that hour.

The pseudo-algorithm below explains the creation of the different tables and their updation in the smooth weather database:-

Steps:

Part 1: Tables creation

The auxiliary Skymet weather and smooth Skymet weather tables are created by running the setup script on the dashboard; this is a one-time process.


Part 2: Table updation

a) Daily weather values from Skymet are fetched using the Skymet API.

b) Weather values are updated in the auxiliary table for every Skymet station, as reported by Skymet.

c) The smooth weather table is updated for each station's hours:

i) For every hour of a station with a missing value, the auxiliary Skymet table is queried for a value.

ii) Using the nearest AWS link table, we find the nearest station that has a value for that hour.

iii) We update that value in the smooth weather table, and in the recorded_borrowed column we mark that hour as borrowed, using a positive integer n indicating that the value was borrowed from the n-th nearest station.

d) Repeat step c for each station.
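The borrowing logic of part 2, step c can be sketched as follows. Function and variable names are hypothetical; the production code operates on the database tables directly.

```python
def smooth_station_hours(station_id, hourly, neighbours, neighbour_data):
    """Fill missing hourly values for one station from its nearest neighbours.
    Returns (smoothed values, recorded_borrowed flags): flag 0 means the value
    was recorded locally, n > 0 means it was borrowed from the n-th nearest
    station, -1 means no neighbour had data either. A sketch of steps c(i)-(iii)."""
    smoothed, flags = [], []
    for hour, value in enumerate(hourly):
        if value is not None:
            smoothed.append(value)
            flags.append(0)
            continue
        # walk the neighbour list in order of proximity until a value is found
        for rank, nb in enumerate(neighbours.get(station_id, []), start=1):
            nb_value = neighbour_data.get(nb, [None] * len(hourly))[hour]
            if nb_value is not None:
                smoothed.append(nb_value)
                flags.append(rank)
                break
        else:
            smoothed.append(None)  # every neighbour was also missing this hour
            flags.append(-1)
    return smoothed, flags
```

The flags column is what allows the database to later report exactly which circles were non-functional and whose values were substituted.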

Part 3: Village AWS Link Table Creation

A link from each village to its nearest AWS station is created and stored in a database table. This table is later used for creating village-level contingency maps using smooth weather; it does not affect the updation of the smooth weather table in any way.


Dashboard Delivery Note

Prepared By - Shubhada Sali

March 2021 IIT Bombay

Mumbai


Table of Contents

1. Introduction
2. Databases on Dashboard
3. Role Based Access
4. Data Download
5. Graphs
6. Shapefile Upload Feature
7. Features and Layout / Other Suggested Changes

List of Figures

Figure 1 Dashboard snapshot
Figure 2 Weather data drop down
Figure 3 Weather indicators
Figure 4 Average temperature on date as a sample map
Figure 5 Data availability map for Skymet stations
Figure 6 Crop water deficit map for cotton, with modification window for legend
Figure 7 Crop water balance indicators list
Figure 8 PoCRA villages coloured as per phases
Figure 9 List of MLP water budget indicators on dashboard
Figure 10 Rabi area percentage in village as a sample indicator map
Figure 11 NRM interventions as on 2019
Figure 12 NRM interventions map zoomed to phase II Beed district villages
Figure 13 FFS sample indicators
Figure 14 Data tab with login for role-based access
Figure 15 PMU login
Figure 16 Download AWS data section with selection drop down for year, district, taluka, circle
Figure 17 Downloaded daily AWS data for selected circle for selected agricultural year
Figure 18 AWS data download in CA login with only selected circles under his purview available to him
Figure 19 Village level download in PMU login with selection drop down for year, district, taluka, village, crop
Figure 20 Downloaded village level estimate file
Figure 21 Village level estimate data download in CA login for villages under his purview
Figure 22 AWS graph section
Figure 23 PMU login district level graph of 'Rainfall on date' showing data for all circles in district
Figure 24 PMU login district level graph of 'Rainfall last week' showing data for all circles in district
Figure 25 AWS for CA login showing only one circle present in CA's cluster in the dropdown
Figure 26 CA login cluster level graph of 'Rainfall on date' showing data for all circles in cluster
Figure 27 CA login cluster level graph of 'Rainfall last week' showing data for all circles in cluster
Figure 28 PMU login – selection of district/taluka/village/crop for village level estimate graph
Figure 29 AET-PET-Rainfall graph for Alewadi village cotton crop
Figure 30 Cumulative ground water (GW) – cumulative runoff – rainfall graph
Figure 31 CA login village level estimate graph, with drop down list of villages assigned to that CA
Figure 32 Shapefile upload feature with instructions in PMU login
Figure 33 Shapefile upload step – provide map title, select shapefile from local machine in required format and submit
Figure 34 Select shapefile attribute (indicator variable) from drop down list to be displayed as map
Figure 35 Selecting indicator variable and providing unit to be displayed on map
Figure 36 New point map with given title available for display in 'uploaded maps' section, shown in this figure
Figure 37 Upload polygon shapefile in EPSG 32643 CRS – zip file with correct naming conventions
Figure 38 Select indicator variable and set variable
Figure 39 Newly uploaded polygon map shown on dashboard
Figure 40 Select map to be removed
Figure 41 Pop-up after removing map from dashboard
Figure 42 Updated upload map section
Figure 43 Legend control made dynamic, pops up when user hovers over 'Legend Control', range can be changed by clicking on '+'
Figure 44 Layers panel to change visibility of layers or remove them from display
Figure 45 Shows how to remove layer from display using Layers panel
Figure 46 Zoom feature is available from 'Z' icon on left hand side – modified as per PMU suggestions
Figure 47 Information tab made more readable by marking selected point 'x' and bifurcating information into sections
Figure 48 Sectional information can be seen by clicking on that section – it drops down to show information
Figure 49 FFS information tab
Figure 50 MLP data
Figure 51 Nearest structure name and location shown in MLP data section
Figure 52 Location information of the selected point


1. Introduction

The GIS dashboard has been upgraded from its initial basic version to serve multiple functionalities as delineated in MoU III. This document builds upon the earlier reports submitted for Dashboard. This illustrates the updates and newly added functionalities.

Following is the list of updates made on the dashboard:

1. Functional Changes
   1. Addition of Databases
      a. All layers – village, taluka, district – updated using the ones given by PMU; their rendering on zoom in and zoom out has been updated
      b. Structures layer added
      c. MLP data incorporated
      d. Dynamic sample indicators from the FFS dataset shared by PMU incorporated
   2. Role Based Access – for PMU, DSAO, CA, AA
   3. Data Download
   4. Inclusion of Graphs
   5. Shapefile Upload Feature for PMU
2. Layout Changes
   1. Left Panel Changes
   2. Overall front end display as suggested by PMU

The dashboard can be found on this link (VM1) : http://gis.mahapocra.gov.in/dashboard_handover/

Figure 1 dashboard snapshot


This is the updated layout of the dashboard, with the data panel moved to the left side. The databases available for mapping on the dashboard can be seen in this left panel. They are as follows –

2. Databases on Dashboard

1. Weather

The weather database is plotted using real-time Skymet data from the Skymet API. It is made available with a two-day lag and is spatially aggregated using Voronoi polygons around the Skymet station locations. In addition, a 'smooth Skymet data' layer is maintained by allocating the data of the nearest working station to any station that is down (not working), in order to provide a smooth map.

Figure 2 Weather data drop down

Various weather indicators are made available for the following temporal scales:
a. Daily indicators
b. Weekly indicators
c. Monthly indicators
d. Cumulative indicators

This can be seen in the figure 2 drop down for weather in the left panel.


Figure 3 Weather indicators

Relevant indicators for
a. Rainfall (mm)
b. Temperature (degrees Celsius)
c. Humidity (%)
d. Wind speed
are included in each temporal section.

A smooth map has been prepared by plotting Voronoi polygons for Skymet stations. A smooth Skymet database has also been prepared from the raw database by assigning the nearest working station's data to non-working stations. Figure 4 shows a sample map of average temperature on 18-12-2020. This data is updated on the dashboard with a two-day lag, to accommodate modifications made by Skymet later.

Figure 4 Average Temperature on date as a sample map

Similarly, Skymet 'data availability' indicators have also been added under the 'Automatic Weather Stations' option of the drop down.


Figure 5 Data availability map for Skymet stations

The red stations denote those which turned off at some point during the current agricultural year, from June 1st 2020 till date. The blue ones denote stations which have been working consistently (never turned off) since June 1st 2020.
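The red/blue classification described above amounts to checking each station's record for gaps. A minimal sketch, assuming daily records with None marking a missing day (ids and values are illustrative):

```python
def classify_stations(daily_records: dict) -> dict:
    """Colour stations by availability: 'blue' if every day since June 1 has a
    recorded value, 'red' if the station missed at least one day.
    daily_records maps a station id to its list of daily values (None = missing)."""
    return {
        sid: "blue" if all(v is not None for v in values) else "red"
        for sid, values in daily_records.items()
    }
```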

2. Crop Water Balance Estimates

Water Balance Model estimates are shown under this section for two crops, cotton and soybean, for the current agricultural year.

Figure 6 Crop water deficit map for cotton, with modification window for legend

Cumulative Soil moisture, deficit and AET have been shown under this section.


Figure 7 crop water balance indicators list

The cumulative values are counted from June 1st of a given year till date. All water balance parameters are in units of mm.

Water Balance parameters definition

a. Total PET – this is the total crop water demand and varies depending on crop.

i. Total PET Cotton – 750-800 mm ii. Total PET soybean – 300-400 mm

b. Total AET – this is the water taken up by the crop (made available to it through rainfall via soil moisture) to satisfy its PET requirement.

c. Total soil moisture deficit = Total PET – Total AET

d. Available Soil moisture – total moisture available in soil layer 1, which is the root zone of crop.

The default scales for these parameters have been defined on the dashboard. For example, AET keeps increasing from June 1st onwards, so the scale in the legend should be modified by the user based on the current date. The scale can be modified by clicking on the '+' icon beside the legend.
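The identity in definition (c) above can be stated directly in code; values are cumulative from June 1 and in mm.

```python
def soil_moisture_deficit(total_pet_mm: float, total_aet_mm: float) -> float:
    """Total soil moisture deficit = Total PET - Total AET, both cumulative
    from June 1 of the agricultural year, all values in mm."""
    if total_aet_mm > total_pet_mm:
        raise ValueError("AET cannot exceed PET")
    return total_pet_mm - total_aet_mm
```

For example, a cotton crop with a cumulative PET of 780 mm and AET of 520 mm has a 260 mm deficit.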

3. Micro Level Planning – Water Budget

Important sample Water Budget indicators have been added from MLP app database to the dashboard.

These indicators are village level water budget indicators showing actual and planned status.


Figure 8 PoCRA villages coloured as per phases

The PoCRA villages are shown phase wise.

Figure 9 List of MLP water budget indicators on dashboard

The explanation on each of these indicators is as follows –

a. Actual Storage Capacity (mm) – This is the existing storage capacity from NRM interventions in the village, converted from TCM to mm by using standard formula – Capacity in mm = (Capacity in TCM (Thousand cubic meter) * 100) / Village area in hectare

Default value – 0 – 25 - 50 mm

b. Proposed storage capacity (mm) – This is the storage capacity from planned NRM interventions in the village, which are fed into the water budget section of MLP app.

Default value – 0 – 25 - 50 mm

c. NRM index = Planned storage capacity in village / runoff available for impounding in village.

Default values – 0-0.5-1 (this is a ratio and has no unit)


d. Water balance in mm for average year – obtained from water balance results of MLP app for average rainfall year for that village during microplanning. The TCM value is converted to mm by using the formula given in point a.

Default values (-400) – (-200) – (50)

e. Water balance in mm for current year – obtained from the water balance results of the MLP app for the latest rainfall year available for that village during microplanning.

Default values (-400) – (-200) – (50)

f. Rabi area % = area under rabi crops in that village *100 / agricultural area in that village.

Default values 0-30-60 %
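The indicator formulas above (points a, c, and f) can be sketched as small helper functions, all values in the units stated in the text:

```python
def capacity_mm(capacity_tcm: float, village_area_ha: float) -> float:
    """Capacity in mm = (capacity in TCM * 100) / village area in hectares
    (1 TCM = thousand cubic metres)."""
    return capacity_tcm * 100.0 / village_area_ha

def nrm_index(planned_capacity: float, impoundable_runoff: float) -> float:
    """NRM index = planned storage capacity / runoff available for impounding
    (same unit for both; the result is a dimensionless ratio)."""
    return planned_capacity / impoundable_runoff

def rabi_area_pct(rabi_area_ha: float, agri_area_ha: float) -> float:
    """Rabi area % = rabi crop area * 100 / agricultural area of the village."""
    return rabi_area_ha * 100.0 / agri_area_ha
```

For instance, 50 TCM of storage in a 200 ha village is 25 mm, consistent with the default 0–25–50 mm scale.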

Figure 10 Rabi area percentage in village as a sample indicator map

This section also contains structure data: a static shapefile showing the locations of existing NRM structures in project villages as of 2019. This map, available on the dashboard, is shown in figure 11.


Figure 11 NRM interventions as on 2019

Figure 12 NRM interventions map zoomed to phase II Beed district villages

4. Uploaded Maps

This section contains maps added by PMU, to be made accessible to the public. A rudimentary shapefile upload functionality has been developed so that PMU experts can add required maps to the dashboard without depending on the developer. It is a role-based functionality available only to the PMU role. It has a number of limitations and scope for further development; some limitations are mentioned below –

1. Currently this is available only for point and polygon layers. It can be extended for line layers in future.


2. It has been designed to display only numeric (integer and decimal) attributes. It can be extended later by the PMU IT team to display categorical attributes.

3. The user must upload the shapefile in the correct CRS:
a. Point shapefile – EPSG 4326
b. Polygon shapefile – EPSG 32643

4. File and folder naming conventions – special characters other than '_' are not allowed, spaces are not allowed, and the file and folder names must match.

5. Attribute naming convention – spaces and special characters are not allowed in attribute names; if such names are present, their data will not be displayed.
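Rules 4 and 5 can be captured by a simple validation sketch. Function names are hypothetical; this is not the dashboard's actual validation code.

```python
import re

def valid_upload_name(name: str) -> bool:
    """Check a file/folder/attribute name against the upload rules:
    only letters, digits, and '_' -- no spaces or other special characters."""
    return re.fullmatch(r"[A-Za-z0-9_]+", name) is not None

def valid_upload(folder: str, filename: str) -> bool:
    """Rule 4: names must be clean AND the folder name must match the
    shapefile's base name exactly."""
    base = filename.rsplit(".", 1)[0]
    return valid_upload_name(folder) and valid_upload_name(base) and folder == base
```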

5. Farm Field Schools

Farm Field Schools (FFS) is an activity conducted in PoCRA villages, and its data is gathered through the FFS app developed by Runtime. Sample indicators from this data have been made available dynamically on the dashboard. A number of data integrity and linkage issues are present in this data; efforts were made to discuss and rectify them, and a note on the same has been shared.

Considering this, only a few feasible indicators could be shown on the dashboard, as follows.

Figure 13 FFS sample indicators

These indicators are shown season wise based on the season captured in FFS.

3. Role Based Access

Role based access has been provided in the 'Data' tab of the dashboard, for the following roles –

a. PMU – all 15 project districts
b. DSAO – assigned district
c. Cluster Assistant (CA) – assigned PoCRA cluster


d. Agriculture Assistant (AA) – assigned villages

Functionalities and download/view access are restricted to the geographical region under each role's purview, as defined by role-based access. These functionalities are illustrated in the next chapters.

Figure 14 Data tab with login for role-based access

4. Data Download

The following datasets have been made available for download in each role for its assigned geographical region. Data is available from the 2020-21 agricultural year and will be updated for coming years.

1. AWS data for skymet circles - daily

2. Village level estimate – Water Balance Model output for cotton and soybean (seasonal cumulative till date for the current agricultural year, June to May)

Figure 15 PMU login


Figure 16 Download AWS data section with selection drop down for year, district, taluka, circle

Downloaded AWS data for selected circle

Figure 17 Downloaded daily AWS data for selected circle for selected agricultural year

The same example of AWS data download is shown for the Cluster Assistant login in figure 18. The cluster assistant can download data only for the AWS circles linked to the villages in the cluster assigned to him.


Figure 18 AWS Data download in CA Login with only selected circles under his purview available to him

Village level Estimate Download

Figure 19 Village level download in PMU Login with selection drop down for year, district, taluka, village, crop


Downloaded Village level Estimate

Figure 20 Downloaded village level estimate file

Currently only two crops, cotton and soybean, are available for download. The file provides aggregated crop water balance data for the selected villages from the start of June up to the current date, with the same two-day lag as on the dashboard, for the current agricultural year.

The data for both AWS and village level estimates can be downloaded by clicking the icon on the left-hand side after selecting the appropriate circle/village.

Following Figure 21 shows Village level estimate download from CA login.

Figure 21 Village Level estimate data download in CA login for villages under his purview


5. Graphs

Graphs are included within role-based access for –
1. AWS
2. Village Level Estimate

The Graphs section is available below Data Download section.

In PMU login, the AWS graphs are available in the form of rainfall histogram for selected date and district.

Figure 22 AWS Graph section

There are two histograms –

The first histogram shows rainfall on the selected date for all circles in the selected district at PMU level.

The second histogram shows the total rainfall over the last week (from the selected date to 7 days back) for all circles in that district.
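The data behind the two histograms can be sketched as follows; the data layout (a per-circle mapping of date to daily rainfall) is an assumption for illustration.

```python
from datetime import date, timedelta

def rainfall_on_date(circle_daily: dict, day: date) -> dict:
    """Rainfall for each circle on the selected date (first histogram)."""
    return {circle: series.get(day, 0.0) for circle, series in circle_daily.items()}

def rainfall_last_week(circle_daily: dict, day: date) -> dict:
    """Total rainfall per circle over the 7 days ending on the selected date
    (second histogram)."""
    window = [day - timedelta(days=i) for i in range(7)]
    return {circle: sum(series.get(d, 0.0) for d in window)
            for circle, series in circle_daily.items()}
```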


Figure 23 PMU Login District level graph of ‘Rainfall on date’ showing data for all circles in district

Figure 24 PMU Login District level graph of ‘Rainfall last week’ showing data for all circles in district

These graphs show data for the admin region assigned to each role: a DSAO can see data only for the circles in his district, and a Cluster Assistant only for the circles linked to villages in his cluster. The same applies to the Agriculture Assistant.


Figure 25 AWS for CA Login showing only one circle present in CA’s cluster in the dropdown

Figure 26 CA Login Cluster level graph of ‘Rainfall on date’ showing data for all circles in cluster


Figure 27 CA Login cluster level graph of ‘Rainfall last week’ showing data for all circles in cluster

2. Village level estimate Graphs

Figure 28 PMU login – selection of district/taluka/village/crop for village level estimate graph

In the PMU login the user selects a single district/taluka/village/crop to create a crop water balance graph; it is generated when the user clicks 'Get Graph'.

Features

For all roles, this crop water balance graph (village level estimate) is created for the current agricultural year, from June 1st till date or 30th October, whichever is earlier. It is available for only two crops – cotton and soybean. The crop water balance is depicted through the two graphs shown below.


Figure 29 AET-PET-Rainfall graph for Alewadi village cotton crop

Legend: The legend is shown at the bottom as 'colour – parameter name (unit) – Y axis (pri/sec)', indicating the axis on which the parameter value should be read, e.g. (aet(in mm)/pri).

For example, in figure 29, AET and PET are shown on the primary Y axis, denoted by 'pri' in the legend (aet(in mm)/pri), and rainfall is shown on the secondary Y axis, denoted by 'sec'.

1. primary Y axis – left side
2. secondary Y axis – right side

Different axes are used because the ranges of values differ between parameters, as explained in the Farm app manual.

Figure 30 Cumulative ground water (GW) – Cumulative runoff – rainfall graph
