
OneOfN-2.11-2.3.1-2.4.0

Pre-release
derrickoswald released this 07 Nov 08:42
· 1235 commits to master since this release

Per the notes in NE-285, this version of OneOfN removes equipment inside substations:

To remove the unwanted Substation elements from the GridLAB-D file:

  • revert the code to the previous version (roll back the changes to the CIMNetworkTopologyProcessor)
  • after reading in the CIM file:
    • identify feeders: medium-voltage (1e3 < V < 50e3) Connector objects in substations (PSRType == "PSRType_Substation")
    • create an RDD of external ACLineSegment where PSRType == "PSRType_Underground" || PSRType == "PSRType_Overhead"
    • create an RDD of elements in substations where PSRType == "PSRType_Substation" || (PSRType == "PSRType_Unknown" && EquipmentContainer != null)
    • create an RDD of the EquipmentContainer id values for these elements
    • delete all CIM elements whose EquipmentContainer is in that RDD, excluding the feeder objects from the first step and the cables from the second step
  • execute the CIMNetworkTopologyProcessor function to create TopologicalNode and TopologicalIsland
  • proceed as before to extract the feeder GridLAB-D models - but now with the reduced CIM
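The filtering steps above can be sketched in plain Scala, substituting ordinary collections for Spark RDDs. The Element case class and its field names here are illustrative only, not the actual CIMReader model classes:

```scala
// Sketch of the substation-reduction predicates described above.
// The Element shape is hypothetical; in the real code these would be
// CIM model classes processed as Spark RDDs.
case class Element(
  id: String,
  cls: String,                        // CIM class, e.g. "Connector", "ACLineSegment"
  psrType: String,                    // e.g. "PSRType_Substation"
  equipmentContainer: Option[String], // containing EquipmentContainer id, if any
  voltage: Double                     // nominal voltage in volts
)

object SubstationFilter {
  // feeders: medium-voltage Connector objects in substations
  def isFeeder(e: Element): Boolean =
    e.cls == "Connector" &&
    e.psrType == "PSRType_Substation" &&
    e.voltage > 1e3 && e.voltage < 50e3

  // external cables: underground or overhead ACLineSegment
  def isExternalCable(e: Element): Boolean =
    e.cls == "ACLineSegment" &&
    (e.psrType == "PSRType_Underground" || e.psrType == "PSRType_Overhead")

  // elements considered to be inside a substation
  def inSubstation(e: Element): Boolean =
    e.psrType == "PSRType_Substation" ||
    (e.psrType == "PSRType_Unknown" && e.equipmentContainer.isDefined)

  // remove substation contents, keeping feeders and external cables
  def reduce(elements: Seq[Element]): Seq[Element] = {
    val containers: Set[String] =
      elements.filter(inSubstation).flatMap(_.equipmentContainer).toSet
    elements.filter(e =>
      isFeeder(e) ||
      isExternalCable(e) ||
      !e.equipmentContainer.exists(containers.contains))
  }
}
```

With RDDs the container-id set would typically be collected or broadcast before the final filter, but the predicates are the same.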
OneOfN 2.11-2.3.1-2.4.0
Usage: OneOfN [options] [<CIM> <CIM> ...]

Creates GridLAB-D .glm models for all medium voltage (N5 network) feeder service areas for one-of-N analysis.

  --help                   prints this usage text
  --version                Scala: 2.11, Spark: 2.3.1, OneOfN: 2.4.0
  --quiet                  suppress informational messages [false]
  --master MASTER_URL      local[*], spark://host:port, mesos://host:port, yarn []
  --opts k1=v1,k2=v2       Spark options [spark.graphx.pregel.checkpointInterval=8,spark.serializer=org.apache.spark.serializer.KryoSerializer]
  --storage_level <value>  storage level for RDD serialization [MEMORY_AND_DISK_SER]
  --deduplicate            de-duplicate input (striped) files [false]
  --three                  use three phase computations [false]
  --tbase <value>          temperature assumed in CIM file (°C) [20.0000]
  --temp <value>           low temperature for maximum fault (°C) [20.0000]
  --logging <value>        log level, one of ALL,DEBUG,ERROR,FATAL,INFO,OFF,TRACE,WARN [OFF]
  --checkpoint <dir>       checkpoint directory on HDFS, e.g. hdfs://... []
  --workdir <dir>          shared directory (HDFS or NFS share) with scheme (hdfs:// or file:/) for work files []
  <CIM> <CIM> ...          CIM rdf files to process


$ spark-submit --master spark://sandbox:7077 --conf spark.driver.memory=2g --conf spark.executor.memory=2g /opt/code/OneOfN-2.11-2.3.1-2.4.0-jar-with-dependencies.jar --logging INFO --checkpoint hdfs://sandbox:8020/checkpoint hdfs://sandbox:8020/bkw_cim_export_azimi_with_topology.rdf
$ hdfs dfs -get -p /simulation
$ chmod --recursive 775 simulation
$ cd simulation
$ for filename in STA*; do echo $filename; pushd $filename/input_data > /dev/null; ./gen; cd ..; gridlabd $filename; popd > /dev/null; done;
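The one-line loop above can be written more defensively: quoting the directory names and using a subshell instead of pushd/popd means a failing step cannot leave the shell in the wrong directory. The layout (STA*/input_data/gen) and the gridlabd binary are assumed from the session above:

```shell
# Hypothetical, more defensive version of the loop above.
# Assumes the same layout: simulation/STA*/input_data/gen and
# a gridlabd binary on the PATH.
run_feeders() {
    base="$1"
    cd "$base" || return 1
    for dir in STA*; do
        [ -x "$dir/input_data/gen" ] || continue  # skip incomplete models
        echo "$dir"
        (
            cd "$dir/input_data" &&
            ./gen &&                              # generate input files
            cd .. &&
            gridlabd "$dir"                       # run the GridLAB-D model
        ) || echo "failed: $dir" >&2              # subshell: no pushd/popd needed
    done
}
```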