If you prefer to download a single file with all LARCC references, you can find it at this link. You can also follow new publications via RSS.
Additionally, you can also find the publications on the LARCC profile on Google Scholar.
2019
@article{FISHER:Elasticity-Hospital:SENSORS:19,
  title     = {Towards Evaluating Proactive and Reactive Approaches on Reorganizing Human Resources in IoT-Based Smart Hospitals},
  author    = {Gabriel Souto Fischer and Rodrigo Rosa da Righi and Cristiano André da Costa and Guilherme Galante and Dalvan Griebler},
  url       = {https://doi.org/10.3390/s19173800},
  doi       = {10.3390/s19173800},
  year      = {2019},
  date      = {2019-09-01},
  journal   = {Sensors},
  volume    = {19},
  number    = {17},
  pages     = {3800},
  publisher = {MDPI},
  abstract  = {Hospitals play an important role on ensuring a proper treatment of human health. One of the problems to be faced is the increasingly overcrowded patients care queues, who end up waiting for longer times without proper treatment to their health problems. The allocation of health professionals in hospital environments is not able to adapt to the demands of patients. There are times when underused rooms have idle professionals, and overused rooms have fewer professionals than necessary. Previous works have not solved this problem since they focus on understanding the evolution of doctor supply and patient demand, as to better adjust one to the other. However, they have not proposed concrete solutions for that regarding techniques for better allocating available human resources. Moreover, elasticity is one of the most important features of cloud computing, referring to the ability to add or remove resources according to the needs of the application or service. Based on this background, we introduce Elastic allocation of human resources in Healthcare environments (ElHealth) an IoT-focused model able to monitor patient usage of hospital rooms and adapt these rooms for patients demand. Using reactive and proactive elasticity approaches, ElHealth identifies when a room will have a demand that exceeds the capacity of care, and proposes actions to move human resources to adapt to patient demand. Our main contribution is the definition of Human Resources IoT-based Elasticity (i.e., an extension of the concept of resource elasticity in Cloud Computing to manage the use of human resources in a healthcare environment, where health professionals are allocated and deallocated according to patient demand). Another contribution is a cost–benefit analysis for the use of reactive and predictive strategies on human resources reorganization. ElHealth was simulated on a hospital environment using data from a Brazilian polyclinic, and obtained promising results, decreasing the waiting time by up to 96.4% and 96.73% in reactive and proactive approaches, respectively.},
  keywords  = {IoT},
  pubstate  = {published},
  tppubtype = {article}
}
@inproceedings{ROCKENBACH:PARCO:19,
  title     = {High-Level Stream Parallelism Abstractions with SPar Targeting GPUs},
  author    = {Dinei A Rockenbach and Dalvan Griebler and Marco Danelutto and Luiz Gustavo Fernandes},
  url       = {https://doi.org/10.3233/APC200083},
  doi       = {10.3233/APC200083},
  year      = {2019},
  date      = {2019-09-01},
  booktitle = {Parallel Computing is Everywhere, Proceedings of the International Conference on Parallel Computing (ParCo)},
  volume    = {36},
  pages     = {543-552},
  publisher = {IOS Press},
  address   = {Prague, Czech Republic},
  series    = {ParCo'19},
  abstract  = {The combined exploitation of stream and data parallelism is demonstrating encouraging performance results in the literature for heterogeneous architectures, which are present on every computer systems today. However, provide parallel software efficiently targeting those architectures requires significant programming effort and expertise. The SPar domain-specific language already represents a solution to this problem providing proven high-level programming abstractions for multi-core architectures. In this paper, we enrich the SPar language adding support for GPUs. New transformation rules are designed for generating parallel code using stream and data parallel patterns. Our experiments revealed that these transformations rules are able to improve performance while the high-level programming abstractions are maintained.},
  keywords  = {GPGPU, Stream processing},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
@inproceedings{VOGEL:PARCO:19,
  title     = {Seamless Parallelism Management for Multi-core Stream Processing},
  author    = {Adriano Vogel and Dalvan Griebler and Marco Danelutto and Luiz Gustavo Fernandes},
  url       = {https://doi.org/10.3233/APC200082},
  doi       = {10.3233/APC200082},
  year      = {2019},
  date      = {2019-09-01},
  booktitle = {Advances in Parallel Computing, Proceedings of the International Conference on Parallel Computing (ParCo)},
  volume    = {36},
  pages     = {533-542},
  publisher = {IOS Press},
  address   = {Prague, Czech Republic},
  series    = {ParCo'19},
  abstract  = {Video streaming applications have critical performance requirements for dealing with fluctuating workloads and providing results in real-time. As a consequence, the majority of these applications demand parallelism for delivering quality of service to users. Although high-level and structured parallel programming aims at facilitating parallelism exploitation, there are still several issues to be addressed for increasing/improving existing parallel programming abstractions. In this paper, we aim at employing self-adaptivity for stream processing in order to seamlessly manage the application parallelism configurations at run-time, where a new strategy alleviates from application programmers the need to set time-consuming and error-prone parallelism parameters. The new strategy was implemented and validated on SPar. The results have shown that the proposed solution increases the level of abstraction and achieved a competitive performance.},
  keywords  = {Stream processing},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
@inproceedings{larcc:smart_datacenter_temperatura:ERRC:19,
  title     = {Proposta de Monitoramento e Gerenciamento Inteligente de Temperatura em Datacenters},
  author    = {Djalma Teixeira and Adriano Vogel and Dalvan Griebler},
  url       = {https://sol.sbc.org.br/index.php/errc/article/view/9209/9112},
  year      = {2019},
  date      = {2019-09-01},
  booktitle = {16th Escola Regional de Redes de Computadores (ERRC)},
  pages     = {1-8},
  publisher = {Sociedade Brasileira de Computação},
  address   = {Alegrete, Brazil},
  series    = {ERRC'19},
  abstract  = {O aumento constante do crescimento e desenvolvimento das infraestruturas computacionais vem impulsionando uma demanda cada vez maior por monitoramento e gerenciamento inteligente de datacenters. Em um ambiente gerenciado autonomicamente, os equipamentos são controlados por meio de ações autonômicas, que são executadas sob determinadas condições sem a necessidade de intervenção humana. O objetivo deste trabalho é propor um modelo conceitual de monitoramento e gerenciamento inteligente para temperatura, que pode ser aplicado tanto em estruturas básicas quanto complexas e adaptado à heterogeneidade dos datacenters atuais.},
  keywords  = {},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
@inproceedings{VOGEL:adaptive-overhead:AutoDaSP:19,
  title     = {Minimizing Self-Adaptation Overhead in Parallel Stream Processing for Multi-Cores},
  author    = {Adriano Vogel and Dalvan Griebler and Marco Danelutto and Luiz Gustavo Fernandes},
  url       = {https://doi.org/10.1007/978-3-030-48340-1_3},
  doi       = {10.1007/978-3-030-48340-1_3},
  year      = {2019},
  date      = {2019-08-01},
  booktitle = {Euro-Par 2019: Parallel Processing Workshops},
  volume    = {11997},
  pages     = {12},
  publisher = {Springer},
  address   = {Göttingen, Germany},
  series    = {Lecture Notes in Computer Science},
  abstract  = {Stream processing paradigm is present in several applications that apply computations over continuous data flowing in the form of streams (e.g., video feeds, image, and data analytics). Employing self-adaptivity to stream processing applications can provide higher-level programming abstractions and autonomic resource management. However, there are cases where the performance is suboptimal. In this paper, the goal is to optimize parallelism adaptations in terms of stability and accuracy, which can improve the performance of parallel stream processing applications. Therefore, we present a new optimized self-adaptive strategy that is experimentally evaluated. The proposed solution provided high-level programming abstractions, reduced the adaptation overhead, and achieved a competitive performance with the best static executions.},
  keywords  = {Self-adaptation, Stream processing},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
@inproceedings{larcc:communication_overhead_lxd:ISCC:19,
  title     = {Minimizing Communication Overheads in Container-based Clouds for HPC Applications},
  author    = {Anderson M Maliszewski and Adriano Vogel and Dalvan Griebler and Eduardo Roloff and Luiz Gustavo Fernandes and Philippe O A Navaux},
  url       = {https://doi.org/10.1109/ISCC47284.2019.8969716},
  doi       = {10.1109/ISCC47284.2019.8969716},
  year      = {2019},
  date      = {2019-07-01},
  booktitle = {IEEE Symposium on Computers and Communications (ISCC)},
  pages     = {1-6},
  publisher = {IEEE},
  address   = {Barcelona, Spain},
  series    = {ISCC'19},
  abstract  = {Although the industry has embraced the cloud computing model, there are still significant challenges to be addressed concerning the quality of cloud services. Network-intensive applications may not scale in the cloud due to the sharing of the network infrastructure. In the literature, performance evaluation studies are showing that the network tends to limit the scalability and performance of HPC applications. Therefore, we proposed the aggregation of Network Interface Cards (NICs) in a ready-to-use integration with the OpenNebula cloud manager using Linux containers. We perform a set of experiments using a network microbenchmark to get specific network performance metrics and NAS parallel benchmarks to analyze the performance impact on HPC applications. Our results highlight that the implementation of NIC aggregation improves network performance in terms of throughput and latency. Moreover, HPC applications have different patterns of behavior when using our approach, which depends on communication and the amount of data transferring. While network-intensive applications increased the performance up to 38%, other applications with aggregated NICs maintained the same performance or presented slightly worse performance.},
  keywords  = {Cloud computing},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
@article{GRIEBLER:JS:19,
  title     = {Simplifying and implementing service level objectives for stream parallelism},
  author    = {Dalvan Griebler and Adriano Vogel and Daniele {De Sensi} and Marco Danelutto and Luiz Gustavo Fernandes},
  url       = {https://doi.org/10.1007/s11227-019-02914-6},
  doi       = {10.1007/s11227-019-02914-6},
  issn      = {0920-8542},
  year      = {2019},
  date      = {2019-06-01},
  journal   = {Journal of Supercomputing},
  volume    = {76},
  pages     = {4603-4628},
  publisher = {Springer},
  abstract  = {An increasing attention has been given to provide service level objectives (SLOs) in stream processing applications due to the performance and energy requirements, and because of the need to impose limits in terms of resource usage while improving the system utilization. Since the current and next-generation computing systems are intrinsically offering parallel architectures, the software has to naturally exploit the architecture parallelism. Implement and meet SLOs on existing applications is not a trivial task for application programmers, since the software development process, besides the parallelism exploitation, requires the implementation of autonomic algorithms or strategies. This is a system-oriented programming approach and requires the management of multiple knobs and sensors (e.g., the number of threads to use, the clock frequency of the cores, etc.) so that the system can self-adapt at runtime. In this work, we introduce a new and simpler way to define SLO in the application’s source code, by abstracting from the programmer all the details relative to self-adaptive system implementation. The application programmer specifies which parts of the code to parallelize and the related SLOs that should be enforced. To reach this goal, source-to-source code transformation rules are implemented in our compiler, which automatically generates self-adaptive strategies to enforce, at runtime, the user-expressed objectives. The experiments highlighted promising results with simpler, effective, and efficient SLO implementations for real-world applications.},
  keywords  = {Self-adaptation, Stream processing},
  pubstate  = {published},
  tppubtype = {article}
}
@misc{larcc:claudio_larissa:TCC:19,
  title        = {Deep Learning in Agriculture: A Systematic Literature Review},
  author       = {Claudio Scheer and Larissa Guder},
  year         = {2019},
  date         = {2019-06-01},
  address      = {Três de Maio, RS, Brazil},
  school       = {Sociedade Educacional Três de Maio (SETREM)},
  abstract     = {With the growth of computational power, the deep learning algorithms have achieved remarkable results in several areas. Agriculture is one of the areas that are using these algorithms for the most varied domains. Therefore, this work presents a systematic literature review to consolidate the state-of-the-art about the use of deep learning applied to agricultural challenges. Papers published between January 2012 and April 2019 were considered. From the 819 papers found, 230 papers were classified. We evaluated the deep learning techniques used, crops covered, data sets used, deep learning and agriculture challenges, and among other important insights. The results have shown that deep learning is successfully used for several crops in agriculture. In the livestock branch, for example, most of the works achieved an accuracy above 95%. In total, 47.2% of the papers achieved an accuracy above 95%. Consequently, there is a lot of work to be done in the area of deep learning for agriculture. Our analysis are very important to support new research that seeks to apply deep learning in agriculture and highlight the research gaps.},
  howpublished = {Undergraduate Thesis},
  keywords     = {Agriculture, Deep learning, Literature review},
  pubstate     = {forthcoming},
  tppubtype    = {misc}
}
@misc{larcc:alisson_diego:TCC:19,
  title        = {Simplificando a Interpretação de Análise de Solo com Inteligência Artificial},
  author       = {Alisson Allebrandt and Diego Henrique Schimidt},
  url          = {https://larcc.setrem.com.br/wp-content/uploads/2020/08/TCC_SETREM__Alisson_e_Diego_.pdf},
  year         = {2019},
  date         = {2019-06-01},
  address      = {Três de Maio, RS, Brazil},
  school       = {Sociedade Educacional Três de Maio (SETREM)},
  abstract     = {One of the aspects that interferes in a good production and agricultural crop is the soil. Its conservation through the correct application of nutrients and fertilization is of paramount importance. Based on this scenario and with the accelerated growth of agricultural technology, an application capable of interpreting the soil analyzes that are generated by soil laboratories resulting from a sample of land collected by the rural producer is proposed. After the analysis, the idea is to suggest the appropriate amount of fertilizers and agricultural nutrients that the producer should apply in his crop. Currently this recommendation process is still manually done by agronomists or desktop software that uses liming and fertilization manuals as the basis for recommendation calculations. This work aims to use machine learning technology, in which more than 30,000 soil analysis records were extracted from the SETREM soil laboratory. Based on the study and analysis of these data, a solution was proposed aimed at creating a training model and a generic way to receive soil analyzes, normalize them, interpret, generate recommendations, and save to a single database so that this data can be used for BI and mining in the future. Therefore, it was developed a mobile application capable of interpreting a photo taken from the soil analysis and then transform the values of the chemical elements present in the image into digital information, which can be consulted and shared in a faster way among the interested people, in addition to starting with the process of feeding a database with information on soil analysis. We obtained results regarding the current need to still use manuals of liming and fertilization as well as the application of artificial intelligence in front of this area. Also, we studied tools of image processing and interpretation of characters with the use of Machine Learning, such as Tesseract OCR and Google Vision, which resulted in a comparison of the two interpretation tools tested.},
  howpublished = {Undergraduate Thesis},
  keywords     = {Agriculture, Deep learning},
  pubstate     = {published},
  tppubtype    = {misc}
}
Teixeira, Djalma Rafael Modelo Conceitual de Monitoramento e Gerenciamento para Smart Datacenters Undergraduate Thesis Undergraduate Thesis, 2019. Resumo | Links | BibTeX | Tags: Cloud computing, IoT @misc{larcc:djalma:TCC:19, title = {Modelo Conceitual de Monitoramento e Gerenciamento para Smart Datacenters}, author = {Djalma Rafael Teixeira}, url = {https://larcc.setrem.com.br/wp-content/uploads/2020/08/TCC_SETREM__Djalma_.pdf}, year = {2019}, date = {2019-06-01}, address = {Três de Maio, RS, Brazil}, school = {Sociedade Educacional Três de Maio (SETREM)}, abstract = {The demand for smart datacenters has been increasing considerably due to the complexity of managing the current infrastructures, which is due to the increasing need for computing resources within organizations. The present work aims to propose a model of intelligent management and monitoring for datacenters and to test its effectiveness through the partial implementation of the same. A complete survey of the physical infrastructure, logical network and services in the LARCC IT infrastructure was carried out. By means of the results obtained, the classification of the datacenter of the laboratory was made according to the requirements of ANSI TIA 942. Through the analysis and research carried out by related works, a conceptual model for monitoring and management was elaborated intelligent for computer infrastructures, which was divided into five major areas: air conditioning, energy, computing, network and security. We also defined the events that affect these elements, how to monitor them and how to manage them based on the autonomous computing approach. With this, the models were implemented regarding temperature and energy, which uses reactive actions to address and contain consequences of overheating and energy loss. To implement this flow of actions was used the tool Zabbix, and its function of executing remote commands for practical application of the model. 
It is concluded that the proposed conceptual model is more effective in the containment of critical events that may affect the infrastructure; these results were tested and validated in practice for the temperature and energy elements.}, howpublished = {Undergraduate Thesis}, keywords = {Cloud computing, IoT}, pubstate = {published}, tppubtype = {misc} } |
Rockenbach, Dinei A; Stein, Charles Michael; Griebler, Dalvan; Mencagli, Gabriele; Torquati, Massimo; Danelutto, Marco; Fernandes, Luiz Gustavo Stream Processing on Multi-cores with GPUs: Parallel Programming Models' Challenges Inproceedings doi International Parallel and Distributed Processing Symposium Workshops (IPDPSW), pp. 834-841, IEEE, Rio de Janeiro, Brazil, 2019. Resumo | Links | BibTeX | Tags: GPGPU, Stream processing @inproceedings{ROCKENBACH:stream-multigpus:IPDPSW:19, title = {Stream Processing on Multi-cores with GPUs: Parallel Programming Models' Challenges}, author = {Dinei A Rockenbach and Charles Michael Stein and Dalvan Griebler and Gabriele Mencagli and Massimo Torquati and Marco Danelutto and Luiz Gustavo Fernandes}, url = {https://doi.org/10.1109/IPDPSW.2019.00137}, doi = {10.1109/IPDPSW.2019.00137}, year = {2019}, date = {2019-05-01}, booktitle = {International Parallel and Distributed Processing Symposium Workshops (IPDPSW)}, pages = {834-841}, publisher = {IEEE}, address = {Rio de Janeiro, Brazil}, series = {IPDPSW'19}, abstract = {The stream processing paradigm is used in several scientific and enterprise applications in order to continuously compute results out of data items coming from data sources such as sensors. The full exploitation of the potential parallelism offered by current heterogeneous multi-cores equipped with one or more GPUs is still a challenge in the context of stream processing applications. In this work, our main goal is to present the parallel programming challenges that the programmer has to face when exploiting CPUs and GPUs' parallelism at the same time using traditional programming models. We highlight the parallelization methodology in two use-cases (the Mandelbrot Streaming benchmark and the PARSEC's Dedup application) to demonstrate the issues and benefits of using heterogeneous parallel hardware. 
The experiments conducted demonstrate how a high-level parallel programming model targeting stream processing, like the one offered by SPar, can be used to reduce the programming effort while still offering a good level of performance compared with state-of-the-art programming models.}, keywords = {GPGPU, Stream processing}, pubstate = {published}, tppubtype = {inproceedings} } |
Stein, Charles M; Rockenbach, Dinei A; Griebler, Dalvan Paralelização do Dedup para Sistemas Multi-core com GPUs Inproceedings 19th Escola Regional de Alto Desempenho da Região Sul (ERAD/RS), Sociedade Brasileira de Computação, Três de Maio, RS, Brazil, 2019. Resumo | Links | BibTeX | Tags: GPGPU @inproceedings{larcc:paralelizacao_multicore_GPU:ERAD:19, title = {Paralelização do Dedup para Sistemas Multi-core com GPUs}, author = {Charles M Stein and Dinei A Rockenbach and Dalvan Griebler}, url = {http://larcc.setrem.com.br/wp-content/uploads/2019/04/192087.pdf}, year = {2019}, date = {2019-04-01}, booktitle = {19th Escola Regional de Alto Desempenho da Região Sul (ERAD/RS)}, publisher = {Sociedade Brasileira de Computação}, address = {Três de Maio, RS, Brazil}, abstract = {O maior volume de dados gerado, trafegado e processado aumenta a demanda por mais poder de processamento e por algoritmos de compressão eficientes. Este trabalho tem como objetivo explorar o paralelismo de stream para arquiteturas multi-core com GPUs na aplicação Dedup, usando SPar com CUDA e OpenCL. Apesar do desempenho não ser o esperado, o artigo contribui com uma análise detalhada dos resultados e sugestões futuras de melhorias.}, keywords = {GPGPU}, pubstate = {published}, tppubtype = {inproceedings} } |
Maliszewski, Anderson M; Fim, Gabriel R; Maron, Carlos A F; Vogel, Adriano; Griebler, Dalvan Avaliação de Desempenho em Contêineres LXD com Aplicações Científicas na Nuvem OpenNebula Inproceedings 19th Escola Regional de Alto Desempenho da Região Sul (ERAD/RS), Sociedade Brasileira de Computação, Três de Maio, RS, Brazil, 2019. Resumo | Links | BibTeX | Tags: Benchmark, Cloud computing @inproceedings{larcc:desempenho_LXD_Opennebula:ERAD:19, title = {Avaliação de Desempenho em Contêineres LXD com Aplicações Científicas na Nuvem OpenNebula}, author = {Anderson M Maliszewski and Gabriel R Fim and Carlos A F Maron and Adriano Vogel and Dalvan Griebler}, url = {http://larcc.setrem.com.br/wp-content/uploads/2019/04/192099.pdf}, year = {2019}, date = {2019-04-01}, booktitle = {19th Escola Regional de Alto Desempenho da Região Sul (ERAD/RS)}, publisher = {Sociedade Brasileira de Computação}, address = {Três de Maio, RS, Brazil}, abstract = {As nuvens privadas IaaS podem fornecer um ambiente atrativo para aplicações científicas. No entanto, como existem diversos modelos de implantação e configuração, avaliar o desempenho dessas aplicações é um desafio. Este artigo tem como objetivo avaliar o desempenho de contêineres LXD gerenciados pelo OpenNebula, utilizando os benchmarks da suite NPB-MPI. Os resultados mostram que o LXD não induz a grandes overheads no desempenho.}, keywords = {Benchmark, Cloud computing}, pubstate = {published}, tppubtype = {inproceedings} } |
Stein, Charles M; Stein, Joao V; Boz, Leonardo; Rockenbach, Dinei A; Griebler, Dalvan Mandelbrot Streaming para Sistemas Multi-core com GPUs Inproceedings 19th Escola Regional de Alto Desempenho da Região Sul (ERAD/RS), Sociedade Brasileira de Computação, Três de Maio, RS, Brazil, 2019. Resumo | Links | BibTeX | Tags: GPGPU, Stream processing @inproceedings{larcc:mandelbrot_multicore_GPU:ERAD:19, title = {Mandelbrot Streaming para Sistemas Multi-core com GPUs}, author = {Charles M Stein and Joao V Stein and Leonardo Boz and Dinei A Rockenbach and Dalvan Griebler}, url = {http://larcc.setrem.com.br/wp-content/uploads/2019/04/192109.pdf}, year = {2019}, date = {2019-04-01}, booktitle = {19th Escola Regional de Alto Desempenho da Região Sul (ERAD/RS)}, publisher = {Sociedade Brasileira de Computação}, address = {Três de Maio, RS, Brazil}, abstract = {Este trabalho visa explorar o paralelismo na aplicação Mandelbrot Streaming para arquiteturas multi-core com GPUs, usando as bibliotecas FastFlow, TBB e SPar com CUDA. A implementação do paralelismo foi baseada no padrão farm, alcançando speedup de 16x no sistema multi-core e de 77x em um ambiente multi-core com duas GPUs. Os resultados evidenciam um melhor desempenho no uso de GPUs, embora tenham sido identificadas futuras melhorias.}, keywords = {GPGPU, Stream processing}, pubstate = {published}, tppubtype = {inproceedings} } |
Maron, Carlos A F; Vogel, Adriano; Griebler, Dalvan; Fernandes, Luiz Gustavo Should PARSEC Benchmarks be More Parametric? A Case Study with Dedup Inproceedings doi 27th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP), pp. 217-221, IEEE, Pavia, Italy, 2019. Resumo | Links | BibTeX | Tags: Benchmark @inproceedings{MARON:parametric-parsec:PDP:19, title = {Should PARSEC Benchmarks be More Parametric? A Case Study with Dedup}, author = {Carlos A F Maron and Adriano Vogel and Dalvan Griebler and Luiz Gustavo Fernandes}, url = {https://doi.org/10.1109/EMPDP.2019.8671592}, doi = {10.1109/EMPDP.2019.8671592}, year = {2019}, date = {2019-02-01}, booktitle = {27th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP)}, pages = {217-221}, publisher = {IEEE}, address = {Pavia, Italy}, series = {PDP'19}, abstract = {Parallel applications of the same domain can present similar patterns of behavior and characteristics. Characterizing common application behaviors can help in understanding performance aspects in real-world scenarios. One way to better understand and evaluate applications' characteristics is by using customizable/parametric benchmarks that enable users to represent important characteristics at run-time. We observed that parameterization techniques should be better exploited in the available benchmarks, especially in the stream processing domain. For instance, although widely used, the stream processing benchmarks available in PARSEC do not support the simulation and evaluation of relevant and modern characteristics. Therefore, our goal is to identify the stream parallelism characteristics present in PARSEC. We also implemented a ready-to-use parameterization support and evaluated the application behaviors considering relevant performance metrics for stream parallelism (service time, throughput, latency). We chose Dedup as our case study. 
The experimental results have shown performance improvements in our parameterization support for Dedup. Moreover, this support increased the customization space for benchmark users and is simple to use. In the future, our solution can potentially be explored on different parallel architectures and parallel programming frameworks.}, keywords = {Benchmark}, pubstate = {published}, tppubtype = {inproceedings} } |
Stein, Charles Michael; Griebler, Dalvan; Danelutto, Marco; Fernandes, Luiz Gustavo Stream Parallelism on the LZSS Data Compression Application for Multi-Cores with GPUs Inproceedings doi 27th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP), pp. 247-251, IEEE, Pavia, Italy, 2019. Resumo | Links | BibTeX | Tags: GPGPU, Stream processing @inproceedings{STEIN:LZSS-multigpu:PDP:19, title = {Stream Parallelism on the LZSS Data Compression Application for Multi-Cores with GPUs}, author = {Charles Michael Stein and Dalvan Griebler and Marco Danelutto and Luiz Gustavo Fernandes}, url = {https://doi.org/10.1109/EMPDP.2019.8671624}, doi = {10.1109/EMPDP.2019.8671624}, year = {2019}, date = {2019-02-01}, booktitle = {27th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP)}, pages = {247-251}, publisher = {IEEE}, address = {Pavia, Italy}, series = {PDP'19}, abstract = {GPUs have been used to accelerate different data parallel applications. The challenge consists in using GPUs to accelerate stream processing applications. Our goal is to investigate and evaluate whether stream parallel applications may benefit from parallel execution on both CPU and GPU cores. In this paper, we introduce new parallel algorithms for the Lempel-Ziv-Storer-Szymanski (LZSS) data compression application. We implemented the algorithms targeting both CPUs and GPUs. GPUs have been used with CUDA and OpenCL to exploit inner algorithm data parallelism. Outer stream parallelism has been exploited using CPU cores through SPar. The parallel implementation of LZSS achieved 135 fold speedup using a multi-core CPU and two GPUs. 
We also observed speedups in applications where we were not expecting them, using the same combined data-stream parallel exploitation techniques.}, keywords = {GPGPU, Stream processing}, pubstate = {published}, tppubtype = {inproceedings} } |
Maliszewski, Anderson M; Griebler, Dalvan Avaliação de Desempenho da Agregação de Interfaces de Rede em Ambientes de Nuvem Privada HiPerfCloud: High Performance in Cloud Technical Report doi Laboratory of Advanced Research on Cloud Computing (LARCC) 2019. Links | BibTeX | Tags: Benchmark, Cloud computing @techreport{larcc:rt5:19, title = {Avaliação de Desempenho da Agregação de Interfaces de Rede em Ambientes de Nuvem Privada HiPerfCloud: High Performance in Cloud}, author = {Anderson M Maliszewski and Dalvan Griebler}, url = {http://larcc.setrem.com.br/wp-content/uploads/2019/12/LARCC_HiPerfCloud_RT5_2019.pdf}, doi = {10.13140/RG.2.2.14800.87044}, year = {2019}, date = {2019-01-01}, institution = {Laboratory of Advanced Research on Cloud Computing (LARCC)}, keywords = {Benchmark, Cloud computing}, pubstate = {published}, tppubtype = {techreport} } |
2018 |
Maliszewski, Anderson M; Griebler, Dalvan; Vogel, Adriano; Schepke, Claudio On the Performance of Multithreading Applications under Private Cloud Conditions Inproceedings doi Symposium on High Performance Computing Systems (WSCAD), pp. 273-273, IEEE, São Paulo, Brazil, 2018. Resumo | Links | BibTeX | Tags: Cloud computing @inproceedings{larcc:multithreading_cloud:WSCAD:18, title = {On the Performance of Multithreading Applications under Private Cloud Conditions}, author = {Anderson M Maliszewski and Dalvan Griebler and Adriano Vogel and Claudio Schepke}, url = {https://doi.org/10.1109/WSCAD.2018.00055}, doi = {10.1109/WSCAD.2018.00055}, year = {2018}, date = {2018-10-01}, booktitle = {Symposium on High Performance Computing Systems (WSCAD)}, pages = {273-273}, publisher = {IEEE}, address = {São Paulo, Brazil}, abstract = {IaaS private clouds provide an attractive environment for scientific applications. However, performance is a challenge, as the additional abstraction layers imposed by virtualization can cause overheads and bottlenecks. This paper contributes a performance analysis of applications in dedicated and shared resource environments under private cloud conditions, deployed with container-based (LXC) or kernel-based (KVM) instances. We selected five benchmarks from the PARSEC suite. In the experimental results, identifying a performance pattern of behavior among the applications was hard. For one set of multi-threading applications, the KVM-based cloud instances achieved better performance; however, for the other set of applications, the LXC-based cloud instances performed better.}, keywords = {Cloud computing}, pubstate = {published}, tppubtype = {inproceedings} } |
Griebler, Dalvan; De Sensi, Daniele ; Vogel, Adriano; Danelutto, Marco; Fernandes, Luiz Gustavo Service Level Objectives via C++11 Attributes Inproceedings doi Euro-Par 2018: Parallel Processing Workshops, pp. 745-756, Springer, Turin, Italy, 2018. Resumo | Links | BibTeX | Tags: Parallel programming @inproceedings{GRIEBLER:SLO-SPar-Nornir:REPARA:18, title = {Service Level Objectives via C++11 Attributes}, author = {Dalvan Griebler and Daniele {De Sensi} and Adriano Vogel and Marco Danelutto and Luiz Gustavo Fernandes}, url = {http://dx.doi.org/10.1007/978-3-030-10549-5_58}, doi = {10.1007/978-3-030-10549-5_58}, year = {2018}, date = {2018-08-01}, booktitle = {Euro-Par 2018: Parallel Processing Workshops}, pages = {745-756}, publisher = {Springer}, address = {Turin, Italy}, series = {Lecture Notes in Computer Science}, abstract = {In recent years, increasing attention has been given to the possibility of guaranteeing Service Level Objectives (SLOs) to users about their applications, either regarding performance or power consumption. SLO can be implemented for parallel applications since they can provide many control knobs (e.g., the number of threads to use, the clock frequency of the cores, etc.) to tune the performance and power consumption of the application. Different from most of the existing approaches, we target sequential stream processing applications by proposing a solution based on C++ annotations. The user specifies which parts of the code to parallelize and what type of requirements should be enforced on that part of the code. Our solution first automatically parallelizes the annotated code and then applies self-adaptation approaches at run-time to enforce the user-expressed objectives. 
We ran experiments on different real-world applications, showing its simplicity and effectiveness.}, keywords = {Parallel programming}, pubstate = {published}, tppubtype = {inproceedings} } |
Vogel, Adriano; Griebler, Dalvan; De Sensi, Daniele ; Danelutto, Marco; Fernandes, Luiz Gustavo Autonomic and Latency-Aware Degree of Parallelism Management in SPar Inproceedings doi Euro-Par 2018: Parallel Processing Workshops, pp. 28-39, Springer, Turin, Italy, 2018. Resumo | Links | BibTeX | Tags: Self-adaptation, Stream processing @inproceedings{VOGEL:Adaptive-Latency-SPar:AutoDaSP:18, title = {Autonomic and Latency-Aware Degree of Parallelism Management in SPar}, author = {Adriano Vogel and Dalvan Griebler and Daniele {De Sensi} and Marco Danelutto and Luiz Gustavo Fernandes}, url = {http://dx.doi.org/10.1007/978-3-030-10549-5_3}, doi = {10.1007/978-3-030-10549-5_3}, year = {2018}, date = {2018-08-01}, booktitle = {Euro-Par 2018: Parallel Processing Workshops}, pages = {28-39}, publisher = {Springer}, address = {Turin, Italy}, series = {Lecture Notes in Computer Science}, abstract = {Stream processing applications have become a representative workload in current computing systems. A significant part of these applications demands parallelism to increase performance. However, programmers often face a trade-off between coding productivity and performance when introducing parallelism. SPar was created to balance this trade-off for application programmers by using the C++11 attribute annotation mechanism. In SPar and other programming frameworks for stream processing applications, the manual definition of the number of replicas to be used for the stream operators is a challenge. In addition to that, low latency is required by several stream processing applications. We noted that explicit latency requirements are poorly considered in state-of-the-art parallel programming frameworks. Since there is a direct relationship between the number of replicas and the latency of the application, in this work we propose an autonomic and adaptive strategy to choose the proper number of replicas in SPar to address latency constraints. 
We experimentally evaluated the implemented strategy and demonstrated its effectiveness on a real-world application, showing that our adaptive strategy can provide higher abstraction levels while automatically managing the latency.}, keywords = {Self-adaptation, Stream processing}, pubstate = {published}, tppubtype = {inproceedings} } |
Maliszewski, Anderson M; Griebler, Dalvan; Schepke, Claudio; Ditter, Alexander; Fey, Dietmar; Fernandes, Luiz Gustavo The NAS Benchmark Kernels for Single and Multi-Tenant Cloud Instances with LXC/KVM Inproceedings doi International Conference on High Performance Computing & Simulation (HPCS), pp. 359-366, IEEE, Orleans, France, 2018. Resumo | Links | BibTeX | Tags: Benchmark, Cloud computing @inproceedings{larcc:NAS_cloud_LXC_KVM:HPCS:2018, title = {The NAS Benchmark Kernels for Single and Multi-Tenant Cloud Instances with LXC/KVM}, author = {Anderson M Maliszewski and Dalvan Griebler and Claudio Schepke and Alexander Ditter and Dietmar Fey and Luiz Gustavo Fernandes}, url = {https://doi.org/10.1109/HPCS.2018.00066}, doi = {10.1109/HPCS.2018.00066}, year = {2018}, date = {2018-07-01}, booktitle = {International Conference on High Performance Computing & Simulation (HPCS)}, pages = {359-366}, publisher = {IEEE}, address = {Orleans, France}, series = {HPCS'18}, abstract = {Private IaaS clouds are an attractive environment for scientific workloads and applications. They provide advantages such as almost instantaneous availability of high-performance computing in a single node as well as compute clusters, and easy access for researchers and users who do not have access to conventional supercomputers. Furthermore, a cloud infrastructure provides elasticity and scalability to ensure and manage any software dependency on the system, with no third-party dependency for researchers. However, one of the biggest challenges is to avoid significant performance degradation when migrating these applications from physical nodes to a cloud environment. Moreover, research investigating multi-tenant cloud instances is still lacking. In this paper, our goal is to perform a comparative performance evaluation of scientific applications with single- and multi-tenant cloud instances using KVM and LXC virtualization technologies under private cloud conditions. 
All analyses and evaluations were carried out based on the NAS Benchmark kernels to simulate different types of workloads. We applied statistical significance tests to highlight the differences. The results have shown that applications running on LXC-based cloud instances outperform KVM-based cloud instances in 93.75% of the experiments w.r.t. the single-tenant case. Regarding multi-tenant instances, LXC outperforms KVM in 45% of the results, where the performance differences were not as significant as expected.}, keywords = {Benchmark, Cloud computing}, pubstate = {published}, tppubtype = {inproceedings} } |
Klein, Maikel; Maliszewski, Anderson Mattheus; Griebler, Dalvan Avaliação do Desempenho do Protocolo Bonding em Máquinas Virtuais LXC e KVM Inproceedings 15th Escola Regional de Redes de Computadores (ERRC), pp. 1-8, Sociedade Brasileira de Computação, Pelotas, BR, 2018. Resumo | Links | BibTeX | Tags: Benchmark, Cloud computing @inproceedings{larcc:link_agreggation:ERRC:18, title = {Avaliação do Desempenho do Protocolo Bonding em Máquinas Virtuais LXC e KVM}, author = {Maikel Klein and Anderson Mattheus Maliszewski and Dalvan Griebler}, url = {http://larcc.setrem.com.br/wp-content/uploads/2018/11/ERRC_2018__Link_Aggregation_.pdf}, year = {2018}, date = {2018-07-01}, booktitle = {15th Escola Regional de Redes de Computadores (ERRC)}, pages = {1-8}, publisher = {Sociedade Brasileira de Computação}, address = {Pelotas, BR}, abstract = {O processamento de grandes volumes de dados (Big Data) e seu armazenamento distribuído vem aumentando gradualmente o uso da rede. Com isso, torna-se necessário o uso de tecnologias para otimizar a largura de banda. Uma das soluções de baixo custo e fácil implementação é a agregação de link. Além disso, a virtualização, usada como base na computação em nuvem, oferece vários benefícios utilizados no Big Data. O objetivo deste trabalho é avaliar o desempenho de rede usando a agregação de link com o protocolo bonding em máquinas virtuais LXC e KVM. Os resultados mostram que o protocolo bonding tem comportamento similar com ambos tipos de virtualização.}, keywords = {Benchmark, Cloud computing}, pubstate = {published}, tppubtype = {inproceedings} } |
Griebler, Dalvan; Vogel, Adriano; Maron, Carlos A F; Maliszewski, Anderson M; Schepke, Claudio; Fernandes, Luiz Gustavo: Performance of Data Mining, Media, and Financial Applications under Private Cloud Conditions. Inproceedings, IEEE Symposium on Computers and Communications (ISCC), IEEE, Natal, Brazil, 2018.

@inproceedings{larcc:parsec_cloudstack_lxc_kvm:ISCC:2018,
  title     = {Performance of Data Mining, Media, and Financial Applications under Private Cloud Conditions},
  author    = {Dalvan Griebler and Adriano Vogel and Carlos A F Maron and Anderson M Maliszewski and Claudio Schepke and Luiz Gustavo Fernandes},
  url       = {https://dx.doi.org/10.1109/ISCC.2018.8538759},
  doi       = {10.1109/ISCC.2018.8538759},
  year      = {2018},
  date      = {2018-06-01},
  booktitle = {IEEE Symposium on Computers and Communications (ISCC)},
  pages     = {1530-1346},
  publisher = {IEEE},
  address   = {Natal, Brazil},
  series    = {ISCC'18},
  abstract  = {This paper contributes to a performance analysis of real-world workloads under private cloud conditions. We selected six benchmarks from PARSEC related to three mainstream application domains (financial, data mining, and media processing). Our goal was to evaluate these application domains in different cloud instances and deployment environments, concerning container- or kernel-based instances and using dedicated or shared machine resources. Experiments have shown that performance varies according to the application characteristics, virtualization technology, and cloud environment. Results highlighted that financial, data mining, and media processing applications running in the LXC instances tend to outperform KVM when there is a dedicated machine resource environment. However, when two instances are sharing the same machine resources, these applications tend to achieve better performance in the KVM instances. Finally, financial applications achieved better performance in the cloud than media and data mining applications.},
  keywords  = {Benchmark, Cloud computing},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
Rista, Cassiano; Teixeira, Marcelo; Griebler, Dalvan; Fernandes, Luiz Gustavo: Evaluating, Estimating, and Improving Network Performance in Container-based Clouds. Inproceedings, IEEE Symposium on Computers and Communications (ISCC), IEEE, Natal, Brazil, 2018.

@inproceedings{larcc:network_performance_container:ISCC:2018,
  title     = {Evaluating, Estimating, and Improving Network Performance in Container-based Clouds},
  author    = {Cassiano Rista and Marcelo Teixeira and Dalvan Griebler and Luiz Gustavo Fernandes},
  url       = {https://doi.org/10.1109/ISCC.2018.8538558},
  doi       = {10.1109/ISCC.2018.8538558},
  year      = {2018},
  date      = {2018-06-01},
  booktitle = {IEEE Symposium on Computers and Communications (ISCC)},
  pages     = {1530-1346},
  publisher = {IEEE},
  address   = {Natal, Brazil},
  series    = {ISCC'18},
  abstract  = {Cloud computing has recently attracted a great deal of interest from both industry and academia, emerging as an important paradigm to improve resource utilization, efficiency, flexibility, and pay-per-use. However, cloud platforms inherently include a virtualization layer that imposes performance degradation on network-intensive applications. Thus, it is crucial to anticipate possible performance degradation to resolve system bottlenecks. This paper uses the Petri Nets approach to create different models for evaluating, estimating, and improving network performance in container-based cloud environments. Based on model estimations, we assessed the network bandwidth utilization of the system under different setups. Then, by identifying possible bottlenecks, we show how the system could be modified to improve performance. We then tested how the model would behave through real-world experiments. When the model indicates probable bandwidth saturation, we propose a link aggregation approach to increase bandwidth, using lightweight virtualization to reduce virtualization overhead. Results reveal that our model anticipates the structural and behavioral characteristics of the network in the cloud environment. Therefore, it systematically improves network efficiency, which saves effort, time, and money.},
  keywords  = {Cloud computing},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
Stein, Charles: Programação Paralela para GPU em Aplicações de Processamento Stream. Undergraduate Thesis, Sociedade Educacional Três de Maio (SETREM), Três de Maio, RS, Brazil, 2018.

@misc{larcc:charles_stein:TCC:18,
  title        = {Programação Paralela para GPU em Aplicações de Processamento Stream},
  author       = {Charles Stein},
  url          = {http://larcc.setrem.com.br/wp-content/uploads/2018/11/TCC_SETREM__Charles_Stein_1.pdf},
  year         = {2018},
  date         = {2018-06-01},
  address      = {Três de Maio, RS, Brazil},
  school       = {Sociedade Educacional Três de Maio (SETREM)},
  abstract     = {Stream processing applications are used in many areas. They usually require real-time processing and have a high computational load, so the parallelization of this type of application is necessary. The use of GPUs can hypothetically increase the performance of these stream processing applications. This work presents the study and parallel software implementation for GPU of stream processing applications. Applications from different areas were chosen and parallelized for CPU and GPU, a set of experiments was conducted, and the results achieved were analyzed. The Sobel, LZSS, Dedup, and Black-Scholes applications were parallelized. The Sobel filter did not gain performance, while LZSS, Dedup, and Black-Scholes obtained speedups of 36x, 13x, and 6.9x, respectively. In addition to performance, the source lines of code of the implementations with the CUDA and OpenCL libraries were measured in order to analyze code intrusion. The tests performed showed that in some applications the use of GPU is advantageous, while in other applications there are no significant gains when compared to the parallel versions on CPU.},
  howpublished = {Undergraduate Thesis},
  keywords     = {GPGPU, Stream processing},
  pubstate     = {published},
  tppubtype    = {misc}
}
Klein, Maikel; Petter, Rudinei: Avaliação do Desempenho dos Protocolos Bonding e MPTCP em Instâncias LXC e KVM com Nuvem OpenNebula. Undergraduate Thesis, Sociedade Educacional Três de Maio (SETREM), Três de Maio, RS, Brazil, 2018.

@misc{larcc:maikel_rudinei:TCC:18,
  title        = {Avaliação do Desempenho dos Protocolos Bonding e MPTCP em Instâncias LXC e KVM com Nuvem OpenNebula},
  author       = {Maikel Klein and Rudinei Petter},
  url          = {http://larcc.setrem.com.br/wp-content/uploads/2018/11/TCC_SETREM__Maikel_e_Rudinei_.pdf},
  year         = {2018},
  date         = {2018-06-01},
  address      = {Três de Maio, RS, Brazil},
  school       = {Sociedade Educacional Três de Maio (SETREM)},
  abstract     = {The study of the Bonding and MPTCP protocols has as a benefit the use of existing infrastructure, exploiting the full capacity that the hardware offers to increase performance and failure redundancy. The main objective of this work is to study, understand, and implement a private cloud with the Bonding and MPTCP protocols over LXC and KVM instances to identify which protocol can achieve the best performance in an OpenNebula cloud environment. The results demonstrated that the bonding protocol has the same behavior in cloud environments with LXC and KVM instances. The MPTCP protocol presented results close to the native environment only in KVM instances, with degradation of network performance when applied in a cloud environment with LXC instances.},
  howpublished = {Undergraduate Thesis},
  keywords     = {Cloud computing},
  pubstate     = {published},
  tppubtype    = {misc}
}
Rockenbach, Dinei A; Anderle, Nadine; Griebler, Dalvan; Souza, Samuel: Estudo Comparativo de Bancos de Dados NoSQL. Journal Article, Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC), 1 (8), SETREM, Três de Maio, RS, Brazil, 2018.

@article{larcc:comparativo_nosql:REABTIC:18,
  title     = {Estudo Comparativo de Bancos de Dados NoSQL},
  author    = {Dinei A Rockenbach and Nadine Anderle and Dalvan Griebler and Samuel Souza},
  url       = {https://revistas.setrem.com.br/index.php/reabtic/article/view/286},
  doi       = {10.5281/zenodo.1228503},
  year      = {2018},
  date      = {2018-04-01},
  journal   = {Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC)},
  volume    = {1},
  number    = {8},
  publisher = {SETREM},
  address   = {Três de Maio, RS, Brazil},
  abstract  = {NoSQL databases emerged to fill limitations of relational databases. The many options in each of the categories, with their distinct characteristics and focus, make this assessment very difficult for decision makers. Most of the time, decisions are taken without the attention and background deserved, due to the related complexities. This article aims to compare the relevant characteristics of each database, abstracting from the information on which their marketing is based. We concluded that although the databases are labeled in a specific category, there is a significant disparity in the functionalities offered by each of them. Also, we observed that new databases are emerging even though there are well-established databases in each of the categories studied. Finally, it is very challenging to suggest the best database for each category because each scenario has its own requirements, which demands a careful analysis where our work can help to simplify this kind of decision.},
  keywords  = {Benchmark, Databases, NoSQL databases},
  pubstate  = {published},
  tppubtype = {article}
}
Stein, Charles M; Griebler, Dalvan: Explorando o Paralelismo de Stream em CPU e de Dados em GPU na Aplicação de Filtro Sobel. Inproceedings, 18th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS), pp. 137-140, Sociedade Brasileira de Computação, Porto Alegre, RS, Brazil, 2018.

@inproceedings{larcc:stream_gpu_cuda:ERAD:18,
  title     = {Explorando o Paralelismo de Stream em CPU e de Dados em GPU na Aplicação de Filtro Sobel},
  author    = {Charles M Stein and Dalvan Griebler},
  url       = {http://larcc.setrem.com.br/wp-content/uploads/2018/04/LARCC_ERAD_IC_Stein_2018.pdf},
  year      = {2018},
  date      = {2018-04-01},
  booktitle = {18th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS)},
  pages     = {137-140},
  publisher = {Sociedade Brasileira de Computação},
  address   = {Porto Alegre, RS, Brazil},
  abstract  = {O objetivo deste estudo é a paralelização combinada do stream em CPU e dos dados em GPU usando uma aplicação de filtro Sobel. Foi realizada uma avaliação do desempenho de OpenCL, OpenACC e CUDA com o algoritmo de multiplicação de matrizes para escolha da ferramenta a ser usada com a SPar. Concluiu-se que, apesar de a GPU apresentar um speedup de 11.81x com CUDA, o uso exclusivo da CPU com a SPar é mais vantajoso nesta aplicação.},
  keywords  = {GPGPU, Stream processing},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
Maliszewski, Anderson M; Griebler, Dalvan; Schepke, Claudio: Desempenho em Instâncias LXC e KVM de Nuvem Privada usando Aplicações Científicas. Inproceedings, 18th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS), pp. 129-132, Sociedade Brasileira de Computação, Porto Alegre, RS, Brazil, 2018.

@inproceedings{larcc:cloudtack_lxc_kvm:ERAD:18,
  title     = {Desempenho em Instâncias LXC e KVM de Nuvem Privada usando Aplicações Científicas},
  author    = {Anderson M Maliszewski and Dalvan Griebler and Claudio Schepke},
  url       = {http://larcc.setrem.com.br/wp-content/uploads/2018/04/LARCC_ERAD_IC_MAliszweski_2018.pdf},
  year      = {2018},
  date      = {2018-04-01},
  booktitle = {18th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS)},
  pages     = {129-132},
  publisher = {Sociedade Brasileira de Computação},
  address   = {Porto Alegre, RS, Brazil},
  abstract  = {As nuvens privadas IaaS oferecem um ambiente atraente para aplicações científicas. Como este ambiente possui camadas adicionais de abstração, alcançar um bom desempenho é um desafio. O objetivo é realizar uma avaliação de desempenho das tecnologias de virtualização baseadas em KVM e LXC gerenciadas pelo CloudStack, usando benchmarks da suíte NPB-OMP. Os resultados revelaram que LXC supera KVM em 93,75% dos experimentos.},
  keywords  = {Cloud computing},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
Maron, Carlos A F; Vogel, Adriano; Griebler, Dalvan: Caracterizando a Implantação e o Desempenho de Aplicações em Ambientes de Nuvem Privada com Recursos Compartilhados e Dedicados. Technical Report, Laboratory of Advanced Research on Cloud Computing (LARCC), 2018.

@techreport{larcc:rt4:18,
  title       = {Caracterizando a Implantação e o Desempenho de Aplicações em Ambientes de Nuvem Privada com Recursos Compartilhados e Dedicados},
  author      = {Carlos A F Maron and Adriano Vogel and Dalvan Griebler},
  url         = {http://larcc.setrem.com.br/wp-content/uploads/2018/12/LARCC_HiPerfCloud_RT4_2017.pdf},
  doi         = {10.13140/RG.2.2.14176.74240},
  year        = {2018},
  date        = {2018-01-01},
  institution = {Laboratory of Advanced Research on Cloud Computing (LARCC)},
  keywords    = {Benchmark, Cloud computing},
  pubstate    = {published},
  tppubtype   = {techreport}
}
2017
Vogel, Adriano; Griebler, Dalvan; Leiria, Raul; Schepke, Claudio: Implantação de Ambiente de Nuvem e Funcionalidades para Alta Disponibilidade Usando CloudStack. Short Course, Santa Maria, RS, Brazil, 2017.

@misc{larcc:course:CloudStack:ERRC:17,
  title        = {Implantação de Ambiente de Nuvem e Funcionalidades para Alta Disponibilidade Usando CloudStack},
  author       = {Adriano Vogel and Dalvan Griebler and Raul Leiria and Claudio Schepke},
  url          = {http://larcc.setrem.com.br/wp-content/uploads/2018/03/ERRC_2017_CloudStack.pdf},
  year         = {2017},
  date         = {2017-09-01},
  address      = {Santa Maria, RS, Brazil},
  abstract     = {A computação em nuvem surgiu a partir do uso combinado de diferentes abordagens (virtualização, clusters computacionais, grids, redes de computadores). Atualmente, ela está normatizada pela ISO e pelo NIST, e sua arquitetura base é composta pelos modelos de implantação (público, privado, híbrido e comunitário) e pelos modelos de serviço (IaaS - Infrastructure as a Service, PaaS - Platform as a Service e SaaS - Software as a Service). As principais características da computação em nuvem são a alocação de recursos sob demanda e o pagamento apenas dos recursos que o usuário utilizou. Além de ser um modelo de negócio interessante para empresas, a computação em nuvem tem sido uma alternativa simples e econômica para a maior parte dos datacenters privados. Junto a isso, emergiram algumas ferramentas que permitem o gerenciamento desta infraestrutura (e.g., OpenStack, CloudStack e OpenNebula). Um aspecto importante em ambientes de nuvem é a alta disponibilidade das aplicações executadas na infraestrutura. Diversas aplicações não podem sofrer interrupções ou falhas enquanto executadas. Por isso, técnicas de redundância e tolerância a falhas são implantadas no nível da infraestrutura e nas aplicações. Nas infraestruturas de nuvem, existe uma crescente demanda para manter online servidores e máquinas virtuais; porém, nas ferramentas open source de IaaS são notáveis os desafios para a alta disponibilidade dos ambientes. Neste minicurso, o objetivo é introduzir o tema computação em nuvem no modelo IaaS usando a ferramenta de gerenciamento de código aberto CloudStack. O minicurso tem enfoque prático, ao implantar uma nuvem privada usando CloudStack e evidenciar as funcionalidades presentes na ferramenta para alta disponibilidade em ambientes de nuvem. Espera-se que, ao final do curso, o participante consiga implantar um ambiente de nuvem usando a ferramenta e tenha conhecimento sobre a importância, as funcionalidades e os desafios para alta disponibilidade em ambientes de produção.},
  howpublished = {Short Course},
  keywords     = {Cloud computing},
  pubstate     = {published},
  tppubtype    = {misc}
}
Leiria, Raul; Vogel, Adriano; Griebler, Dalvan; Schepke, Claudio: Uma Proposta para o Monitoramento Energético de Nuvens Computacionais Privadas no Zabbix. Inproceedings, 15th Escola Regional de Redes de Computadores (ERRC), pp. 1-4, Sociedade Brasileira de Computação, Santa Maria, BR, 2017.

@inproceedings{larcc:zabbix_energy_cloud:ERRC:17,
  title     = {Uma Proposta para o Monitoramento Energético de Nuvens Computacionais Privadas no Zabbix},
  author    = {Raul Leiria and Adriano Vogel and Dalvan Griebler and Claudio Schepke},
  url       = {http://larcc.setrem.com.br/wp-content/uploads/2018/02/CR_ERRC_Leiria_2017.pdf},
  year      = {2017},
  date      = {2017-09-01},
  booktitle = {15th Escola Regional de Redes de Computadores (ERRC)},
  pages     = {1-4},
  publisher = {Sociedade Brasileira de Computação},
  address   = {Santa Maria, BR},
  abstract  = {In recent years, cloud computing has been consolidated as a new computational paradigm due to its widespread adoption. Proportionally, this drives an increase in the power consumption of data centers. As a consequence, there is a growing demand for monitoring the power consumption of computational infrastructures. Therefore, this work proposes a mechanism to monitor cloud computing power draw through the Zabbix open-source network monitoring tool.},
  keywords  = {Cloud computing},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
Rockenbach, Dinei A; Anderle, Nadine: Análise e Avaliação Comparativa do Desempenho de Bancos de Dados NoSQL. Undergraduate Thesis, Sociedade Educacional Três de Maio (SETREM), Três de Maio, RS, Brazil, 2017.

@misc{larcc:dinei_nadine:TCC:17,
  title        = {Análise e Avaliação Comparativa do Desempenho de Bancos de Dados NoSQL},
  author       = {Dinei A Rockenbach and Nadine Anderle},
  url          = {http://larcc.setrem.com.br/wp-content/uploads/2018/02/dinei_nadine_TCC_2017.pdf},
  year         = {2017},
  date         = {2017-07-01},
  address      = {Três de Maio, RS, Brazil},
  school       = {Sociedade Educacional Três de Maio (SETREM)},
  abstract     = {NoSQL databases were created to overcome the limitations of relational databases. With the increase in their popularity came an expansion in the number of systems available, and the decision-making process to decide which option best fits the enterprise needs may be harder because of the number of options. The goal of this work is to perform a comparative study of four categories of NoSQL databases: key-value, column family, document-oriented, and graph or triple. The approach included deductive and quali-quantitative methods, because it departs from an abstract entity to create a bibliographical study of technologies, and a statistical analysis of the data was also made. Observation, experimentation, and documentation techniques were used to report the results. The theoretical foundation was created based on bibliographical research, and the results are demonstrated through a survey and through a statistical analysis of the studied tools. The work thus contributed an analysis of different databases and their performance; the results show qualitatively and quantitatively their characteristics and point out the main advantages. It also confirmed the importance of scientific research, not only to the community but also to society in general and everyone that uses these technologies. In the end, it was possible to highlight the performance of the Couchbase and Aerospike databases for the tested workload and infrastructure.},
  howpublished = {Undergraduate Thesis},
  keywords     = {Databases, NoSQL databases},
  pubstate     = {published},
  tppubtype    = {misc}
}
Maliszewski, Anderson M; Baum, Willian: Performance Characterizations of IaaS Private Clouds for Scientific and Enterprise Workloads. Undergraduate Thesis, Sociedade Educacional Três de Maio (SETREM), Três de Maio, RS, Brazil, 2017.

@misc{larcc:anderson_willian:TCC:17,
  title        = {Performance Characterizations of IaaS Private Clouds for Scientific and Enterprise Workloads},
  author       = {Anderson M Maliszewski and Willian Baum},
  url          = {http://larcc.setrem.com.br/wp-content/uploads/2018/02/anderson_willian_TCC_2017.pdf},
  year         = {2017},
  date         = {2017-07-01},
  address      = {Três de Maio, RS, Brazil},
  institution  = {Sociedade Educacional Três de Maio (SETREM)},
  school       = {Sociedade Educacional Três de Maio (SETREM)},
  abstract     = {Private IaaS clouds offer an attractive environment for enterprise and scientific fields, providing advantages such as scalability, security, and independence from third parties. However, one of the challenges is to port applications to the cloud environment without compromising performance. In response to this, the goal of this text is to characterize application performance in private IaaS clouds using scientific and enterprise applications. Therefore, CloudStack was used to manage the clouds, and KVM- and LXC-based virtualization technologies were deployed. To represent real-world applications from the scientific and enterprise fields, the NPB-OMP and PARSEC suites were used. These applications were benchmarked to characterize the high-performance and multi-tenancy environments. A statistical method was used to verify whether there were significant differences among the clouds in each proposed environment. The results reveal that scientific and enterprise workloads are statistically different in the majority of the experiments performed in KVM- and LXC-based clouds; however, there are results with non-significant differences.},
  howpublished = {Undergraduate Thesis},
  keywords     = {},
  pubstate     = {published},
  tppubtype    = {misc}
}
Rista, Cassiano; Griebler, Dalvan; Maron, Carlos A F; Fernandes, Luiz Gustavo Improving the Network Performance of a Container-Based Cloud Environment for Hadoop Systems Inproceedings doi International Conference on High Performance Computing & Simulation (HPCS), pp. 619-626, IEEE, Genoa, Italy, 2017. Resumo | Links | BibTeX | Tags: @inproceedings{larcc:link_aggregation:HPCS:2017, title = {Improving the Network Performance of a Container-Based Cloud Environment for Hadoop Systems}, author = {Cassiano Rista and Dalvan Griebler and Carlos A F Maron and Luiz Gustavo Fernandes}, url = {https://ieeexplore.ieee.org/document/8035136/}, doi = {10.1109/HPCS.2017.97}, year = {2017}, date = {2017-07-01}, booktitle = {International Conference on High Performance Computing & Simulation (HPCS)}, pages = {619-626}, publisher = {IEEE}, address = {Genoa, Italy}, abstract = {Cloud computing has emerged as an important paradigm to improve resource utilization, efficiency, flexibility, and the pay-per-use billing structure. However, cloud platforms cause performance degradation due to their virtualization layer and may not be appropriate for the requirements of high-performance applications, such as big data. This paper tackles the problem of improving network performance in container-based cloud instances to create a viable alternative for running network-intensive Hadoop applications. Our approach consists of deploying link aggregation via the IEEE 802.3ad standard to increase the available bandwidth and using LXC (Linux Container) cloud instances to create a Hadoop cluster. In order to evaluate the efficiency of our approach and the overhead added by the container-based cloud environment, we ran a set of experiments to measure throughput, latency, bandwidth utilization, and completion times. The results show that our approach adds minimal overhead in the cloud environment, increases throughput, and reduces latency. Moreover, our approach proves to be a suitable alternative for running Hadoop applications, reducing completion times by up to 33.73%.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
Rockenbach, Dinei A; Anderle, Nadine; Griebler, Dalvan; Souza, Samuel Estudo Comparativo de Banco de Dados Chave-Valor com Armazenamento em Memória Inproceedings 13th Escola Regional de Banco de Dados (ERBD), pp. 1-4, Sociedade Brasileira de Computação, Passo Fundo, BR, 2017. Resumo | Links | BibTeX | Tags: Databases, NoSQL databases @inproceedings{larcc:database_keyvalue:ERBD:17, title = {Estudo Comparativo de Banco de Dados Chave-Valor com Armazenamento em Memória}, author = {Dinei A Rockenbach and Nadine Anderle and Dalvan Griebler and Samuel Souza}, url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/ANDERLE_ERBD_2017.pdf}, year = {2017}, date = {2017-04-01}, booktitle = {13th Escola Regional de Banco de Dados (ERBD)}, pages = {1-4}, publisher = {Sociedade Brasileira de Computação}, address = {Passo Fundo, BR}, abstract = {Key-value databases emerge to address the limitations of relational databases, and with the increasing capacity of RAM it is possible to offer greater performance and versatility in data storage and processing. The objective is to perform a comparative study of the in-memory key-value databases Redis, Memcached, Voldemort, Aerospike, Hazelcast, and Riak KV. The work thus contributes an analysis of different databases, with results that qualitatively demonstrate their characteristics and point out their main advantages.}, keywords = {Databases, NoSQL databases}, pubstate = {published}, tppubtype = {inproceedings} } |
Baum, Willian; Maron, Carlos A F; Griebler, Dalvan; Schepke, Claudio Caracterização do Desempenho de Aplicações Pipeline em Instâncias KVM e LXC de uma Nuvem CloudStack Inproceedings 17th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS), pp. 267-270, Sociedade Brasileira de Computação, Ijuí, RS, Brazil, 2017. Resumo | Links | BibTeX | Tags: @inproceedings{hiperfcloud:parsec_pipeline:ERAD:17, title = {Caracterização do Desempenho de Aplicações Pipeline em Instâncias KVM e LXC de uma Nuvem CloudStack}, author = {Willian Baum and Carlos A F Maron and Dalvan Griebler and Claudio Schepke}, url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/BAUM_ERAD_2017.pdf}, year = {2017}, date = {2017-04-01}, booktitle = {17th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS)}, pages = {267-270}, publisher = {Sociedade Brasileira de Computação}, address = {Ijuí, RS, Brazil}, abstract = {Computational clouds are an alternative for high-performance computing. This paper evaluates the performance of applications structured with the pipeline pattern in a CloudStack cloud deployment with LXC- and KVM-type instances. The Ferret and Dedup applications from the PARSEC suite were tested, and a significant difference was observed for Dedup. On overall average, for this application the LXC instance is 40.19% better than KVM.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
Maliszewski, Anderson M; Vogel, Adriano; Griebler, Dalvan; Schepke, Claudio Desempenho das Operações de Criar e Deletar Instâncias KVM Simultâneas em Nuvens CloudStack e OpenStack Inproceedings 17th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS), pp. 283-286, Sociedade Brasileira de Computação, Ijuí, RS, Brazil, 2017. Resumo | Links | BibTeX | Tags: @inproceedings{hiperfcloud:management_operations:ERAD:17, title = {Desempenho das Operações de Criar e Deletar Instâncias KVM Simultâneas em Nuvens CloudStack e OpenStack}, author = {Anderson M Maliszewski and Adriano Vogel and Dalvan Griebler and Claudio Schepke}, url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/MALISZEWSKI_ERAD_2017.pdf}, year = {2017}, date = {2017-04-01}, booktitle = {17th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS)}, pages = {283-286}, publisher = {Sociedade Brasileira de Computação}, address = {Ijuí, RS, Brazil}, abstract = {IaaS management platforms such as OpenStack and CloudStack are deployed to create private clouds. Performance is important because it impacts the time to provision resources. This paper evaluates the management performance of these platforms. The results show an average difference of 66.3% in instance creation. In instance deletion, the difference was 28.6%, with both results favorable to OpenStack.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
Vogel, Adriano; Griebler, Dalvan; Schepke, Claudio; Fernandes, Luiz Gustavo An Intra-Cloud Networking Performance Evaluation on CloudStack Environment Inproceedings doi 25th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP), pp. 5, IEEE, St. Petersburg, Russia, 2017. Resumo | Links | BibTeX | Tags: @inproceedings{larcc:intra-cloud_networking_cloudstack:PDP:17, title = {An Intra-Cloud Networking Performance Evaluation on CloudStack Environment}, author = {Adriano Vogel and Dalvan Griebler and Claudio Schepke and Luiz Gustavo Fernandes}, url = {http://ieeexplore.ieee.org/document/7912689/}, doi = {10.1109/PDP.2017.40}, year = {2017}, date = {2017-03-01}, booktitle = {25th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP)}, pages = {5}, publisher = {IEEE}, address = {St. Petersburg, Russia}, abstract = {Infrastructure-as-a-Service (IaaS) is a cloud on-demand commodity built on top of virtualization technologies and managed by IaaS tools. In this scenario, performance is a relevant matter because a set of aspects may impact and increase system overhead. Specifically regarding the network, the use of virtualized capabilities may cause performance degradation (e.g., in latency and throughput). The goal of this paper is to contribute to networking performance evaluation, providing new insights for private IaaS clouds. To achieve our goal, we deploy CloudStack environments and conduct experiments with different configurations and techniques. The research findings demonstrate that KVM-based cloud instances have small network performance degradation regarding throughput (about 0.2% for coarse-grained and 6.8% for fine-grained messages), while container-based instances have even better results. On the other hand, KVM instances present worse latency (about 12.4% on coarse-grained and twice as much on fine-grained messages w.r.t. the native environment), while container-based instances perform better, with results close to the native environment. Furthermore, we demonstrate a performance optimization of applications running on KVM.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
Maron, Carlos A F; Vogel, Adriano; Griebler, Dalvan Caracterizando o Desempenho de Rede e Aplicações Pipeline em Ambientes de Nuvem Privada Technical Report doi Laboratory of Advanced Research on Cloud Computing (LARCC) 2017. @techreport{larcc:rt3:17, title = {Caracterizando o Desempenho de Rede e Aplicações Pipeline em Ambientes de Nuvem Privada}, author = {Carlos A F Maron and Adriano Vogel and Dalvan Griebler}, url = {http://larcc.setrem.com.br/wp-content/uploads/2018/03/LARCC_HiPerfCloud_RT3_2016.pdf}, doi = {10.13140/RG.2.2.10821.29922/1}, year = {2017}, date = {2017-01-01}, institution = {Laboratory of Advanced Research on Cloud Computing (LARCC)}, keywords = {}, pubstate = {published}, tppubtype = {techreport} } |
2016 |
Maron, Carlos A F; Griebler, Dalvan; Schepke, Claudio; Fernandes, Luiz Gustavo Desempenho de OpenStack e OpenNebula em Estações de Trabalho: Uma Avaliação com Microbenchmarks e NPB Journal Article doi Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC), 1 (6), pp. 15, 2016. Resumo | Links | BibTeX | Tags: @article{larcc:nas_workstations:REABTIC:16, title = {Desempenho de OpenStack e OpenNebula em Estações de Trabalho: Uma Avaliação com Microbenchmarks e NPB}, author = {Carlos A F Maron and Dalvan Griebler and Claudio Schepke and Luiz Gustavo Fernandes}, url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/MARON_REABTIC_2016.pdf}, doi = {10.5281/zenodo.345597}, year = {2016}, date = {2016-12-01}, journal = {Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC)}, volume = {1}, number = {6}, pages = {15}, publisher = {SETREM}, address = {Três de Maio, Brazil}, abstract = {IaaS (Infrastructure as a Service) clouds provide on-demand computing resources (i.e., memory, networking, storage, and processing units) for running applications. Studies that evaluate IaaS cloud performance are limited to the virtualization layer and ignore the impact of the management tools. In contrast, our research investigates their impact in order to identify whether there are influences or differences between OpenStack and OpenNebula. We used intensive workloads (microbenchmarks) and scientific parallel applications. Statistically, the results demonstrated that OpenNebula was 11.07% better with microbenchmarks and 8.41% better with scientific parallel applications.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Leiria, Raul; Schepke, Claudio; de Mello, Aline Vieira; Griebler, Dalvan Um Monitor de Consumo Energético para Computação em Nuvem na Ferramenta OpenNebula Inproceedings 17th Simpósio de Sistemas Computacionais de Alto Desempenho (WSCAD), pp. 134-145, Sociedade Brasileira de Computação (SBC), Aracaju, Sergipe, 2016. Resumo | Links | BibTeX | Tags: @inproceedings{larcc:energy_opennebula:WSCAD:16, title = {Um Monitor de Consumo Energético para Computação em Nuvem na Ferramenta OpenNebula}, author = {Raul Leiria and Claudio Schepke and Aline Vieira de Mello and Dalvan Griebler}, url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/LEIRIA_WSCAD_2016.pdf}, year = {2016}, date = {2016-10-01}, booktitle = {17th Simpósio de Sistemas Computacionais de Alto Desempenho (WSCAD)}, pages = {134-145}, publisher = {Sociedade Brasileira de Computação (SBC)}, address = {Aracaju, Sergipe}, abstract = {Computational clouds consume a lot of energy and are responsible for at least 2% of global carbon dioxide emissions. Current cloud management tools have no resources for monitoring the energy consumption of their infrastructures, nor any information on electricity demand, which is an integral part of the cloud's maintenance cost. Therefore, our paper proposes a model for monitoring electrical consumption in computational clouds. We created an add-on named Monitor Energetico (ME) for monitoring energy consumption in data centers virtualized with Kernel-based Virtual Machine and managed by OpenNebula. The experiments were performed using the Sysbench tool to stress our environment, and the results showed that our tool works well and offers intuitive monitoring visualization.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
Maron, Carlos A F; Vogel, Adriano; Benedetti, Vera L L; Shubeita, Fauzi; Schepke, Claudio; Griebler, Dalvan Panorama Geral e Resultados do Projeto HiPerfCloud Inproceedings 15th Jornada de Pesquisa SETREM, pp. 4, SETREM, Três de Maio, Brazil, 2016. Resumo | Links | BibTeX | Tags: @inproceedings{larcc:hiperfcloud:JP:16, title = {Panorama Geral e Resultados do Projeto HiPerfCloud}, author = {Carlos A F Maron and Adriano Vogel and Vera L L Benedetti and Fauzi Shubeita and Claudio Schepke and Dalvan Griebler}, url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/HiPerfCloud_JP_SETREM_2016.pdf}, year = {2016}, date = {2016-10-01}, booktitle = {15th Jornada de Pesquisa SETREM}, pages = {4}, publisher = {SETREM}, address = {Três de Maio, Brazil}, abstract = {The HiPerfCloud project, under way at LARCC at Faculdade SETREM, conducts infrastructure-level research on computational clouds. The goal of the project is to analyze the impact that high-performance scientific applications suffer when executed in private clouds and to evaluate the deployment technologies involved. The project's papers published in national and international venues have contributed to the state of the art in the field. Recent findings indicated that infrastructure aspects, namely network and virtualization, influence the performance of applications executed in the cloud, while IaaS tools differ with respect to management (scheduling, availability, security).}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
Vogel, Adriano; Leiria, Raul; Schepke, Claudio; Griebler, Dalvan Nuvem Privada com OpenNebula: da Implantação ao Desenvolvimento Short Course Short Course, 2016. Resumo | Links | BibTeX | Tags: @misc{larcc:course:OpenNebula:ERRC:16, title = {Nuvem Privada com OpenNebula: da Implantação ao Desenvolvimento}, author = {Adriano Vogel and Raul Leiria and Claudio Schepke and Dalvan Griebler}, url = {http://larcc.setrem.com.br/wp-content/uploads/2018/03/ERRC_2016_OpenNebula.pdf}, year = {2016}, date = {2016-09-01}, address = {Porto Alegre, RS, Brazil}, abstract = {Cloud computing is a paradigm that has been consolidating with the emergence of new technologies. Its service models are defined by NIST (National Institute of Standards and Technology) as IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service), distinguishing the software and service layers. Currently, the vast majority of applications run on a cloud infrastructure, supported by the lowest-level layer, IaaS. An environment can be public--where users can contract computational resources as a service--or private--where the cloud is kept within the company's domain for internal services. The objective of this short course is to introduce cloud computing in the IaaS model using the open-source management tool OpenNebula. This tool was one of the pioneers in the free-software IaaS community and is characterized mainly by its simplicity and performance. OpenNebula has been well accepted in academia for its fast deployment and the possibility of extending it with new functionalities. The short course has a practical focus, with the deployment of a private cloud using OpenNebula and a demonstration of plugin development. By the end of the course, participants are expected to be able to deploy a private cloud using OpenNebula and to have an overview of the software architecture in order to customize and create new plugins, improving cloud management.}, howpublished = {Short Course}, keywords = {}, pubstate = {published}, tppubtype = {misc} } |
Barth, Andréia; Wolfer, Camila; Lovato, Adalberto; Griebler, Dalvan Avaliação da Irradiação Solar como Fonte de Energia Renovável no Noroeste do Estado do Rio Grande do Sul Através de Uma Rede Neural Journal Article doi Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC), 1 (5), pp. 15, 2016. Resumo | Links | BibTeX | Tags: @article{larcc:neural_netoworks:REABTIC:16, title = {Avaliação da Irradiação Solar como Fonte de Energia Renovável no Noroeste do Estado do Rio Grande do Sul Através de Uma Rede Neural}, author = {Andréia Barth and Camila Wolfer and Adalberto Lovato and Dalvan Griebler}, url = {http://larcc.setrem.com.br/wp-content/uploads/2018/02/ANDREIA_CAMILA_REABTIC_2016.pdf}, doi = {10.5281/zenodo.345585}, year = {2016}, date = {2016-08-01}, journal = {Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC)}, volume = {1}, number = {5}, pages = {15}, publisher = {SETREM}, address = {Três de Maio, RS, Brazil}, abstract = {Solar irradiation is one of the cleanest renewable energy sources available today. In this work, the goal was to implement a neural network capable of evaluating solar irradiation in the northwest region of Rio Grande do Sul. This assessment targets meteorological data from January to April 2015. A Perceptron network was implemented and trained using the MATLAB software. The results indicated that the system achieved high accuracy and that the region is a suitable place for energy production from solar irradiation.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Barth, Andréia; Wolfer, Camila Aplicação de Redes Neurais na Avaliação da Irradiação Solar como Fonte de Energia Renovável Undergraduate Thesis Undergraduate Thesis, 2016. Resumo | Links | BibTeX | Tags: @misc{larcc:andreia_camila:TCC:16, title = {Aplicação de Redes Neurais na Avaliação da Irradiação Solar como Fonte de Energia Renovável}, author = {Andréia Barth and Camila Wolfer}, url = {http://larcc.setrem.com.br/wp-content/uploads/2018/02/andreia_camila_TCC_2016.pdf}, year = {2016}, date = {2016-08-01}, address = {Três de Maio, RS, Brazil}, school = {Sociedade Educacional Três de Maio (SETREM)}, abstract = {As a consequence of technological advances and human interference in the global climate, alternatives are increasingly sought to meet the energy needs of billions of people. Energy from the sun renews itself naturally through its cycle and can be used sustainably, aiming at minimal impact on the environment. In Brazil, solar energy is one of the most promising energy options, since most of its territory receives high solar radiation throughout the year. From this perspective arises the subject of this study, which focuses on the application of neural networks in the evaluation of solar radiation as a source of renewable energy. The work, which aims to assess the availability of energy from solar radiation, seeks to answer the following problem: is the existing solar irradiation in the northwest of Rio Grande do Sul suitable for use as a source of renewable energy? To address this question, methods of approach such as critical realism were used, and, under the quantitative approach, the environmental data necessary for the study were measured. As methods of procedure, a literature review was conducted in order to gather knowledge on the studied areas, along with data collection, used to gather information from the SETREM meteorological station. The Perceptron neural network architecture designed, which brought effective results, comprises an input layer containing a vector of 8 elements, a hidden layer of 20 neurons, the Levenberg-Marquardt algorithm for network training, purelin and log-sigmoid transfer functions, and an output layer. The structured network proved to be flexible in the use of data and can be easily altered without substantial impact on its functionality. After analysis of the results, it is concluded that the northwest region of the Rio Grande do Sul state would have been able to generate 1,211 kW/h of solar energy in the summer season alone, that is, a power of 750 kW for a plant occupying an area of one hectare, since the region has good predictions of solar irradiation, essential for this activity.}, howpublished = {Undergraduate Thesis}, keywords = {}, pubstate = {published}, tppubtype = {misc} } |
Pieper, Ricardo; Griebler, Dalvan; Lovato, Adalberto Towards a Software as a Service for Biodigestor Analytics Journal Article doi Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC), 1 (5), pp. 15, 2016. Resumo | Links | BibTeX | Tags: @article{larcc:saas_analytics:REABTIC:16, title = {Towards a Software as a Service for Biodigestor Analytics}, author = {Ricardo Pieper and Dalvan Griebler and Adalberto Lovato}, url = {http://larcc.setrem.com.br/wp-content/uploads/2017/04/PIEPER_REABTIC_2016.pdf}, doi = {10.5281/zenodo.345587}, year = {2016}, date = {2016-08-01}, journal = {Revista Eletrônica Argentina-Brasil de Tecnologias da Informação e da Comunicação (REABTIC)}, volume = {1}, number = {5}, pages = {15}, publisher = {SETREM}, address = {Três de Maio, Brazil}, abstract = {The field of machine learning is becoming even more important in the last years. The ever-increasing amount of data and complexity of computational problems challenges the currently available technology. Meanwhile, anaerobic digesters represent a good alternative for renewable energy production in Brazil. However, performing efficient and accurate predictions/analytics while completely abstracting machine learning details from end-users might not be a simple task to achieve. Usually, such tools are made for a specific scenario and may not fit with particular and general needs. Our goal was to create a SaaS for biogas data analytics by using a neural network. Therefore, an open source, cloud-enabled SaaS (Software as a Service) was developed and deployed in LARCC (Laboratory of Advanced Researches on Cloud Computing) at SETREM. The results have shown the SaaS application is able to perform predictions. The neural network's accuracy is not significantly worse than a state-of-the-art implementation, and its training speed is faster. 
The user interface proved to be intuitive, and the predictions were accurate when the training algorithm was provided with sufficient data. In addition, the file processing and network training times were acceptable under typical workload conditions.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Pieper, Ricardo Anaerobic Digester Analytics: Towards a Smart Software as a Service Undergraduate Thesis Undergraduate Thesis, 2016. Resumo | Links | BibTeX | Tags: @misc{larcc:pieper:TCC:16, title = {Anaerobic Digester Analytics: Towards a Smart Software as a Service}, author = {Ricardo Pieper}, url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/PIEPER_TCC_2015.pdf}, year = {2016}, date = {2016-08-01}, address = {Três de Maio, RS, Brazil}, school = {Sociedade Educacional Três de Maio (SETREM)}, abstract = {The machine learning field has become increasingly important in recent years. The ever-increasing amount of data challenges the currently available technology. Meanwhile, anaerobic digesters represent a good alternative for renewable energy production in Brazil. However, performing efficient and accurate predictions/analytics while completely abstracting machine learning details from end users is not a simple task to achieve. Usually, such tools are made for a specific scenario and may not fit particular and general needs in other projects. The thesis goal was to create a SaaS for biogas data analytics using a neural network. Therefore, an open-source, cloud-enabled SaaS (Software as a Service) was developed and deployed in LARCC (Laboratory of Advanced Researches on Cloud Computing) at SETREM. The results have shown that the neural network's accuracy is not significantly worse than that of a state-of-the-art implementation, and its training speed is faster. However, the algorithm is yet to be tested with real-world biogas data. The user interface proved to be intuitive, and the predictions with synthetic data were accurate when the training algorithm was provided with good-quality data.
Also, the file processing and network training times were acceptable under typical workload conditions.}, howpublished = {Undergraduate Thesis}, keywords = {}, pubstate = {published}, tppubtype = {misc} } |
Vogel, Adriano; Maron, Carlos A F; Griebler, Dalvan; Schepke, Claudio Medindo o Desempenho de Implantações de OpenStack, CloudStack e OpenNebula em Aplicações Científicas Inproceedings 16th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS), pp. 279-282, Sociedade Brasileira de Computação, São Leopoldo, RS, Brazil, 2016. Resumo | Links | BibTeX | Tags: @inproceedings{hiperfcloud:nas_all:ERAD:16, title = {Medindo o Desempenho de Implantações de OpenStack, CloudStack e OpenNebula em Aplicações Científicas}, author = {Adriano Vogel and Carlos A F Maron and Dalvan Griebler and Claudio Schepke}, url = {http://larcc.setrem.com.br/wp-content/uploads/2017/03/VOGEL_ERAD_2016.pdf}, year = {2016}, date = {2016-04-01}, booktitle = {16th Escola Regional de Alto Desempenho do Estado do Rio Grande do Sul (ERAD/RS)}, pages = {279-282}, publisher = {Sociedade Brasileira de Computação}, address = {São Leopoldo, RS, Brazil}, abstract = {Cloud environments enable running applications on demand and are an alternative for scientific applications. Performance is one of the main challenges, due to the use of virtualization, which introduces losses and variations. The goal of this work was to deploy private cloud environments with different IaaS tools, measuring the performance of parallel applications. Overall, the results showed few contrasts among the tools.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
Vogel, Adriano; Griebler, Dalvan; Maron, Carlos A F; Schepke, Claudio; Fernandes, Luiz Gustavo Private IaaS Clouds: A Comparative Analysis of OpenNebula, CloudStack and OpenStack Inproceedings doi 24th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP), pp. 672-679, IEEE, Heraklion Crete, Greece, 2016. Resumo | Links | BibTeX | Tags: @inproceedings{larcc:IaaS_private:PDP:16, title = {Private IaaS Clouds: A Comparative Analysis of OpenNebula, CloudStack and OpenStack}, author = {Adriano Vogel and Dalvan Griebler and Carlos A F Maron and Claudio Schepke and Luiz Gustavo Fernandes}, url = {http://ieeexplore.ieee.org/document/7445407/}, doi = {10.1109/PDP.2016.75}, year = {2016}, date = {2016-02-01}, booktitle = {24th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP)}, pages = {672-679}, publisher = {IEEE}, address = {Heraklion Crete, Greece}, abstract = {Despite the evolution of cloud computing in recent years, the performance and a comprehensive understanding of the available private cloud tools are still under research. This paper contributes an analysis of the Infrastructure as a Service (IaaS) domain by mapping new insights and discussing the challenges for improving cloud services. The goal is to make a comparative analysis of the OpenNebula, OpenStack and CloudStack tools, evaluating their differences in support for flexibility and resiliency. Also, we aim to evaluate these three cloud tools when they are deployed using the same hypervisor (KVM) in order to discover new empirical insights. Our research results demonstrated that OpenStack is the most resilient and CloudStack is the most flexible for deploying an IaaS private cloud.
Moreover, the performance experiments indicated some contrasts among the private IaaS cloud instances when running intensive workloads and scientific applications.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |