<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Software Engineer in Deep Learning | Bulent Siyah]]></title><description><![CDATA[You can access my work in all areas on my website (www.bulentsiyah.com).
#ComputerVision #DeepLearning]]></description><link>https://www.bulentsiyah.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1607803124742/i2vAjRZk5.png</url><title>Software Engineer in Deep Learning | Bulent Siyah</title><link>https://www.bulentsiyah.com</link></image><generator>RSS for Node</generator><lastBuildDate>Tue, 07 Apr 2026 20:55:42 GMT</lastBuildDate><atom:link href="https://www.bulentsiyah.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[AI and Large Language Model (LLM) Governance and Security | My Article in the Softtech 2024 Technology Report]]></title><description><![CDATA[Softtech, one of Turkey's leading technology companies, publishes an annual Technology Report featuring the views of experts in their fields. This year's report focuses on generative AI technologies, covering what distinguishes humans from ar...]]></description><link>https://www.bulentsiyah.com/yapay-zeka-ve-buyuk-dil-modelleri-llmler-yonetisimi-ve-guvenligi-softtech-2024-teknoloji-raporu-yazim</link><guid isPermaLink="true">https://www.bulentsiyah.com/yapay-zeka-ve-buyuk-dil-modelleri-llmler-yonetisimi-ve-guvenligi-softtech-2024-teknoloji-raporu-yazim</guid><category><![CDATA[artificial intelligence]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Sat, 04 Jan 2025 21:09:44 GMT</pubDate><content:encoded><![CDATA[<p>As one of Turkey's leading technology companies, Softtech publishes an annual Technology Report featuring the views of experts in their fields. This year's report was prepared with a focus on generative AI technologies, covering topics such as what distinguishes humans from artificial intelligence, the business world, artificial life, digital life, and physical life. In addition, the Technology Radar section offers predictions about future technological developments and examines in detail the impact of artificial intelligence on the labor market.</p>
<p>The 2024 report also includes a humble piece of mine on “<strong>AI and Large Language Model (LLM) Governance and Security</strong>”. You can read the full report at <a target="_blank" href="https://softtech.com.tr/2024-softtech-teknoloji-raporu/">https://softtech.com.tr/2024-softtech-teknoloji-raporu/</a>. My article is on page 71.</p>
<h1 id="heading-yapay-zeka-ve-buyuk-dil-modelleri-llmler-yonetisimi-ve-guvenligi"><strong>AI and Large Language Model (LLM) Governance and Security</strong></h1>
<p>Artificial intelligence and large language models (LLMs) are rapidly transforming our world, with the potential to revolutionize many industries and every area of our lives. This power, however, comes with great responsibility. Effective governance and security measures are essential to ensure that AI and LLMs are used responsibly and safely. Let us examine the key challenges and considerations in AI and LLM governance and security, and discuss the latest developments and practices in this area.</p>
<h2 id="heading-zorluklar-ve-dikkat-edilmesi-gerekenler"><strong>Challenges and Considerations</strong></h2>
<p>AI and LLM governance and security involve a number of unique challenges and considerations. Understanding how AI and large language models work, especially complex systems, can be a daunting task. This lack of transparency can make it difficult to hold developers and users accountable for their actions. Establishing clear mechanisms for transparency and accountability during the development and use of these technologies is therefore crucial.</p>
<p><img src="https://spectrum.ieee.org/media-library/a-bar-chart-shows-the-scores-of-the-10-companies-ranked-in-stanford-s-ai-transparency-index.jpg?id=50029047&amp;width=1200&amp;height=899" alt="A bar chart shows the scores of the 10 companies ranked in Stanford's AI transparency index. " class="image--center mx-auto" /></p>
<p>[1] The 2023 Foundation Model Transparency Index</p>
<p>AI and LLMs are trained on data that may reflect the biases and discrimination of the real world. This can lead to unfair and discriminatory outcomes, reinforcing existing social inequalities. Addressing bias and ensuring fairness in AI systems is an important responsibility.</p>
<p>AI and LLMs can be used to create new and powerful forms of attack and surveillance that threaten individuals' privacy and overall security. Strong security measures, including encryption, access control, and auditing, are needed to protect against potential threats.</p>
<p>AI and LLMs can be used to generate realistic but false information, which can have devastating effects on democracy and social cohesion. Preventing the spread of misinformation and disinformation is a critical concern in AI governance.</p>
<h2 id="heading-en-son-gelismeler-ve-en-iyi-uygulamalar"><strong>Latest Developments and Best Practices</strong></h2>
<p>A number of organizations and initiatives are actively working to address these challenges in AI and LLM governance and security. Some of the latest developments and best practices in this area include the following:</p>
<p>The United Nations (UN) has recognized the need to assess the risks associated with artificial intelligence, and has sought collaboration with major technology companies such as Google and Microsoft to develop frameworks that address these risks and improve AI governance. Google is investing in AI safety research and has even invested heavily in Anthropic, a company focused on AI safety. This shows that the importance of AI safety and security is increasingly being recognized.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfkXgcG6yOak4NghVP16YEIttzKk5bvfb_K_GTlJk9cVQKMomk3w2OLaj34PtGktIAFK-f1wt5tqn0t8wzB7gEA6wAPcQmjyVsT1YhbzeM_m4tOf7DHXV4wYnsfAZTOYVZ9D_ruqeUQ4BiD0bGNMUH8YN20iy6HV5STgUX1Qg?key=PaCMrw_AV48r-m_n9mGvGw" alt /></p>
<p>OpenAI, a leading organization in AI research, has established a new team dedicated specifically to assessing the "catastrophic risks" associated with AI. This underlines the need for proactive risk management in AI development.</p>
<p>Startups are emerging with innovative tools and technologies to help organizations monitor, govern, and secure their AI systems. These ventures play a crucial role in advancing AI security practices.</p>
<p>Best practices for AI and LLM governance and security require organizations to:</p>
<ul>
<li><p>Be transparent about how they use AI and LLMs.</p></li>
<li><p>Establish clear policies and procedures that hold developers and users accountable for their actions.</p></li>
<li><p>Mitigate bias in AI and LLM systems, using diverse training data to detect and address it.</p></li>
<li><p>Protect AI and LLM systems with strong security and privacy measures, including encryption, access control, and robust auditing procedures.</p></li>
</ul>
<p>Organizations should also have policies and procedures in place to prevent AI and LLMs from being misused to generate and spread misinformation and disinformation. Human review of outputs and verification tools can be valuable in this context.</p>
<h2 id="heading-cozum"><strong>Discussion</strong></h2>
<p>AI and LLMs have the potential to revolutionize many areas of our lives, but effective governance and security measures must be taken to ensure they are used responsibly and safely. Organizations should prioritize transparency, accountability, fairness, security, and privacy, and take precautions to prevent these technologies from being misused to generate false information. Beyond the challenges discussed above, it is extremely important to consider the ethical implications of using AI and LLMs and to align them with human values. It is also necessary to think about long-term societal impacts and to develop policies and regulations that promote the responsible and beneficial use of these technologies. As AI and LLMs continue to evolve and gain widespread adoption, a robust and effective governance and security framework is vital. Such a framework will help these technologies serve the greater good while mitigating the associated risks. Responsible AI development and use will contribute greatly to shaping a fairer and safer future.</p>
<h2 id="heading-sonuc"><strong>Conclusion</strong></h2>
<p>Artificial intelligence and large language models (LLMs) have enormous potential to fundamentally change our lives, but with this power comes great responsibility. An effective governance and security framework is essential for AI and LLMs to be used responsibly and safely. Organizations should be transparent about how their AI and LLMs are used, and should implement clear policies and procedures that hold developers and users accountable for their actions. They should also take measures to reduce bias, protect security and privacy, and prevent the spread of false information. This will ensure that AI and LLMs are used safely and responsibly, and will reduce the risks associated with these technologies. As AI and LLMs continue to develop and gain wider adoption, a robust and effective governance and security framework is indispensable: one that not only ensures these technologies are used for good, but also mitigates the risks that come with them. Responsible AI development and use plays a vital role in shaping a fairer and safer future.</p>
<h2 id="heading-kaynaklar"><strong>References</strong></h2>
<p>[1] Top AI Shops Fail Transparency Test: </p>
<p>[<a target="_blank" href="https://spectrum.ieee.org/ai-ethics">https://spectrum.ieee.org/ai-ethics</a>]</p>
<p>[2] Over half of employees have no idea how their companies use ai: [<a target="_blank" href="https://www.cnbc.com/2023/10/24/over-half-of-employees-have-no-idea-how-their-companies-use-ai.html#:~:text=54%25%20of%20employees%20have%20no,their%20job%20tasks%20by%202028">https://www.cnbc.com/2023/10/24/over-half-of-employees-have-no-idea-how-their-companies-use-ai.html</a>]</p>
<p>[3] UN Asks Google, Microsoft to Help It Figure Out How Risky AI Is: [<a target="_blank" href="https://gizmodo.com/un-wants-figure-out-just-how-dangerous-ai-is-1850966058">https://gizmodo.com/un-wants-figure-out-just-how-dangerous-ai-is-1850966058</a>]</p>
<p>[4] Google to invest another $2B in AI firm Anthropic: Report: [<a target="_blank" href="https://cointelegraph.com/news/google-to-invest-another-two-billion-in-ai-firm-anthropic">https://cointelegraph.com/news/google-to-invest-another-two-billion-in-ai-firm-anthropic</a>]</p>
<p>[5] OpenAI forms new team to assess ‘catastrophic risks’ of AI: [<a target="_blank" href="https://finance.yahoo.com/news/openai-forms-team-whose-mission-171353116.html?guccounter=1#:~:text=OpenAI%20is%20building%20a%20new,researcher%20and%20a%20research%20engineer">https://finance.yahoo.com/news/openai-forms-team-whose-mission-171353116.html?guccounter=1</a>]</p>
<p>[6] Cranium raises $25M to fund enterprise AI monitoring, security, and compliance platform: </p>
<p>[<a target="_blank" href="https://venturebeat.com/security/cranium-raises-25m-to-fund-enterprise-ai-monitoring-security-and-compliance-platform/">https://venturebeat.com/security/cranium-raises-25m-to-fund-enterprise-ai-monitoring-security-and-compliance-platform/</a>]</p>
<p>[7] Generative AI training data sets are now trackable — and often legally complicated: </p>
<p>[<a target="_blank" href="https://www.computerworld.com/article/3709490/generative-ai-training-data-sets-are-now-trackable-and-often-legally-complicated.html">https://www.computerworld.com/article/3709490/generative-ai-training-data-sets-are-now-trackable-and-often-legally-complicated.html</a>]</p>
<p>[8] Google will require Android apps to better moderate AI-generated content: </p>
<p>[<a target="_blank" href="https://www.theverge.com/2023/10/25/23931732/android-generative-ai-rules-app-developer-policy-google">https://www.theverge.com/2023/10/25/23931732/android-generative-ai-rules-app-developer-policy-google</a>]</p>
<p>[9] Tech Experts Warn Humanity Must Act Now to Avoid ‘Societal-Scale’ Damage by AI: </p>
<p>[<a target="_blank" href="https://www.commondreams.org/news/risks-of-artificial-intelligence">https://www.commondreams.org/news/risks-of-artificial-intelligence</a>]</p>
<p>[10] AI firms must be held responsible for harm they cause, ‘godfathers’ of technology say: </p>
<p>[<a target="_blank" href="https://www.theguardian.com/technology/2023/oct/24/ai-firms-must-be-held-responsible-for-harm-they-cause-godfathers-of-technology-say">https://www.theguardian.com/technology/2023/oct/24/ai-firms-must-be-held-responsible-for-harm-they-cause-godfathers-of-technology-say</a>]</p>
]]></content:encoded></item><item><title><![CDATA[Yapay Öğrenme Kış Okulu 2024 (Machine Learning Winter School) | My AI Talks]]></title><description><![CDATA[Hello👋, artificial intelligence is rapidly finding its way into every area of life and is now on everyone's agenda. To explore this important technology in more depth and discuss its applications, I spoke on November 9-10 at the Machine Learning Winter School 2024, organized by Koç University, Türkiye İş Bankası...]]></description><link>https://www.bulentsiyah.com/yapay-ogrenme-kis-okulu-2024-yapay-zeka-konusmalarim</link><guid isPermaLink="true">https://www.bulentsiyah.com/yapay-ogrenme-kis-okulu-2024-yapay-zeka-konusmalarim</guid><category><![CDATA[generative ai]]></category><category><![CDATA[RAG ]]></category><category><![CDATA[#agent]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Sun, 15 Dec 2024 18:50:06 GMT</pubDate><content:encoded><![CDATA[<p>Hello👋, artificial intelligence is rapidly finding its way into every area of life today and is now on everyone's agenda. To explore this important technology in more depth and discuss its applications, I had the opportunity to take part as a speaker at the Yapay Öğrenme Kış Okulu 2024 (Machine Learning Winter School) event, organized on November 9-10 in cooperation with Koç University, Türkiye İş Bankası, and Softtech.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734285018099/8b9b52a1-2adc-49f6-ba87-cf2f6467a933.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734285494806/e9de7a80-25f1-4377-83d2-3b158ce40605.png" alt class="image--center mx-auto" /></p>
<p><strong>Conference Details</strong><br />I had the opportunity to share in-depth knowledge on current topics such as generative AI, large language models (LLMs), and the use and overall structure of AI in business processes.</p>
<p>The topic of my talk was <strong>"Generative AI: Its Types and Transformative Domains"</strong>. Generative AI covers algorithms trained to produce data. The capacity of this technology to generate new, original content (text, images, audio, video, and so on) is driving a leap in creativity and productivity across many industries.</p>
<p>In this context, I shared the fundamental concepts and technical details of the generative AI field and gave examples of popular applications of these models in <strong>text and code generation</strong>. I also pointed participants to resources such as DeepLearning.AI and Hugging Face, showing how they can learn more in this area. Generative AI has many application areas, and I shared my course recommendations for building solid knowledge in each of them.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734287661768/eec4b03e-0022-46b3-8a2e-d91266cf14a4.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-prompt-engineering-ve-stratejiler">Prompt Engineering and Strategies</h3>
<p>Prompt engineering is a set of strategies for getting more effective output from AI models. In my talk, I discussed how this technique is used, particularly with large language models.</p>
<p>One prominent method is <strong>"Chain of Thought"</strong> prompting, in which the model is structured to reason step by step. This approach allows the model to analyze its own intermediate outputs and correct its mistakes along the way.</p>
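<p>As a rough illustration, a chain-of-thought prompt can be as simple as a wrapper around the user's question. The wording and the example question below are my own, not from the talk, and the actual model call (OpenAI, Hugging Face, etc.) is deliberately left out so the sketch stays self-contained:</p>

```python
# Minimal sketch of Chain-of-Thought prompting: we only build the prompt
# string; sending it to an LLM API is outside the scope of this example.

def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question so the model is nudged to reason step by step."""
    return (
        "Answer the question below. Think step by step, write out each\n"
        "intermediate step, then give the final answer on its own line.\n\n"
        f"Question: {question}\n"
        "Steps:"
    )

prompt = chain_of_thought_prompt(
    "A train travels 120 km in 1.5 hours. What is its average speed?"
)
print(prompt)
```

<p>The same question asked without the "think step by step" scaffold tends to get a bare (and more error-prone) answer; the scaffold is what makes the intermediate reasoning visible and correctable.</p>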
<p>A more advanced technique, the <strong>"Tree of Thoughts"</strong> approach, analyzes a problem along multiple solution paths and selects the most suitable one.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734287851796/c0d81f08-8a00-46cc-bc92-6efef7b0fe59.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-retrieval-augmented-generation-rag">Retrieval-Augmented Generation (RAG)</h3>
<p><strong>RAG</strong> is a technique used to integrate additional knowledge into AI models.</p>
<ul>
<li><p>This method relies on the model actively retrieving and processing information from a knowledge source, rather than pulling it from its internal memory alone.</p>
</li>
<li><p>For example, it can use search engines or private data sets to give more accurate answers based on real-time information.</p>
</li>
</ul>
<p>This technique is important for processing large amounts of data effectively in enterprises.</p>
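<p>A toy sketch of the retrieve-then-generate idea: the three "documents" below are made up, simple word overlap stands in for a real embedding-based retriever, and the "generation" step is just assembling the augmented prompt that would be sent to an LLM:</p>

```python
import re

# Made-up knowledge base; a real system would use a vector database.
DOCUMENTS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available on weekdays between 09:00 and 18:00.",
    "Premium accounts include priority support and a 10 GB quota.",
]

def words(text: str) -> set[str]:
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by how many words they share with the query."""
    q = words(query)
    ranked = sorted(docs, key=lambda d: len(q & words(d)), reverse=True)
    return ranked[:k]

def build_rag_prompt(query: str) -> str:
    """Compose the augmented prompt: retrieved context + user question."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_rag_prompt("What is the refund policy?"))
```

<p>Because the answer is grounded in the retrieved context rather than the model's memory, updating the knowledge base updates the answers without retraining the model.</p>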
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734287920856/46c25efc-9537-484a-aee5-de944f51e5f2.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-agent-kullanimi">Using Agents</h3>
<p>An agent is, at its core, a dynamic system configured to carry out a specific task.</p>
<ul>
<li><p>In an AI setting, <strong>agent</strong> models can make decisions appropriate to a given scenario and build automated workflows.</p>
</li>
<li><p>For example, agent structures can be built that behave like a "sales representative" and give tailored answers to customer questions.</p>
</li>
</ul>
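<p>A minimal sketch of the idea: an agent loop that picks a tool for the request and folds the tool's result into its reply. In practice the tool choice is made by an LLM; here a simple dispatch rule stands in for it, and both tools and the tiny product catalog are invented for the example:</p>

```python
def calculator(expression: str) -> str:
    # eval is acceptable in this toy sketch; never use it on untrusted input.
    return str(eval(expression, {"__builtins__": {}}))

def product_lookup(name: str) -> str:
    catalog = {"basic": "9 USD/month", "premium": "29 USD/month"}  # made-up data
    return catalog.get(name.lower(), "unknown product")

# The agent's available tools, keyed by the action name in the request.
TOOLS = {"calculate": calculator, "price": product_lookup}

def agent(request: str) -> str:
    """Dispatch a 'action: argument' request to a tool and wrap the result."""
    action, _, argument = request.partition(":")
    tool = TOOLS.get(action.strip())
    if tool is None:
        return "Sorry, I have no tool for that."
    return f"Result: {tool(argument.strip())}"

print(agent("calculate: 12 * 7"))  # Result: 84
print(agent("price: premium"))     # Result: 29 USD/month
```

<p>Real agent frameworks replace the <code>partition</code>-based dispatch with an LLM that reads tool descriptions, decides which tool to call, and iterates until the task is done.</p>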
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734287954139/220e01c9-2845-4ef8-9bf7-55a41c9848e4.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-fine-tuning-ile-model-iyilestirme">Improving Models with Fine-Tuning</h3>
<p>Fine-tuning is the retraining of a general model so that it focuses on a specific topic or data set. With this method:</p>
<ul>
<li><p>Models can be adapted to specialized industrial requirements.</p>
</li>
<li><p>More accurate and effective outputs can be obtained.</p>
</li>
</ul>
<p>I presented examples of how this method can be applied across different industries.</p>
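<p>The core idea can be shown with a deliberately tiny stand-in model: start from "pretrained" weights and continue training on a small domain-specific dataset. Everything below is a toy illustration in pure Python, not a real LLM workflow; libraries such as Hugging Face Transformers apply the same loop at vastly larger scale:</p>

```python
# "Pretrained" weight: the general model maps x -> 1.0 * x.
w = 1.0

# Domain data where the true relation is y = 2x, so the generic model is off.
domain_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

lr = 0.02
for epoch in range(200):            # a few passes of gradient descent
    for x, y in domain_data:
        grad = 2 * (w * x - y) * x  # d/dw of the squared error (w*x - y)^2
        w -= lr * grad

print(round(w, 3))  # converges close to 2.0 after fine-tuning
```

<p>The point of the sketch: fine-tuning does not start from scratch, it nudges existing weights toward the domain data, which is why it needs far less data and compute than pretraining.</p>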
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734287981829/9f8cb6a4-1a3e-41c1-b699-0092d9242198.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734287996136/e254fa76-a733-4cb4-81ab-206813723af3.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-sonuc-ve-tavsiyeler">Conclusion and Recommendations</h3>
<p>Generative AI is opening the door to a whole new era in information technology. For anyone who wants to keep up with developments in this field, I recommend resources such as DeepLearning.AI and Hugging Face.</p>
<p>For training and self-development, <strong>technology channels on YouTube</strong>, <strong>MOOC platforms</strong> (Coursera, edX), and hands-on participation in generative AI projects can all help build domain knowledge.</p>
<p>Seeing the interest in these topics at the event was highly motivating. Feel free to share your questions or comments with me!</p>
]]></content:encoded></item><item><title><![CDATA[The Most Complex Visual System Inspiring Computer Vision in the World]]></title><description><![CDATA[As someone who works in the field of Computer Vision, I have always been curious about creatures whose abilities differ from ours. Computer Vision is an interdisciplinary scientific field that deals with how computers can extract meaning from digital images or ...]]></description><link>https://www.bulentsiyah.com/the-most-complex-visual-system-inspiring-computer-vision-in-the-world</link><guid isPermaLink="true">https://www.bulentsiyah.com/the-most-complex-visual-system-inspiring-computer-vision-in-the-world</guid><category><![CDATA[Computer Vision]]></category><category><![CDATA[Artificial Intelligence]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Fri, 22 Apr 2022 01:40:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1650590461422/Bc5isq2XX.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As someone who works in the field of Computer Vision, I have always been curious about creatures whose abilities differ from ours. Computer Vision is an interdisciplinary scientific field that deals with how computers can extract meaning from digital images or videos. </p>
<p>While this discipline aims to give machines the ability to see, it also means studying the creature with the best visual ability and how it senses and perceives the world, since that sets the benchmark for "seeing at its best".</p>
<p>So, meet Stomatopoda, the creature with the best visual system in the world (better known as the mantis shrimp)!</p>
<p><img src="https://i.pinimg.com/736x/f0/50/de/f050de6ade5748235bec4057ccf4848b.jpg" alt /></p>
<p>The eyes of mantis shrimps, or stomatopods, are mounted on movable stalks and can move independently of each other. They are thought to have the most complex eyes and visual systems in the animal kingdom. Breakthrough advances in technology, such as next-generation communication systems, may be inspired by the eyes of mantis shrimps. Their eyes can perceive at least 100,000 colors, about ten times as many as humans can.</p>
<h3 id="heading-lets-get-to-know-the-stomatopodas-better">Let's Get to Know the Stomatopodas Better</h3>
<p>Mantis shrimps, or Stomatopods, are carnivorous marine crustaceans of the order Stomatopoda, which branched out from other members of the Malacostraca class about 340 million years ago.</p>
<p>Every living thing on earth has different eyesight. Our eyes contain receptors that allow us to see colors. Dogs have two types of photoreceptors (green and blue), while humans have three (red, green, and blue). Birds have four (red, green, blue, and UV). How many photoreceptors do you think mantis shrimps have?</p>
<p><img src="https://static.wixstatic.com/media/2854b7_cdf36b72146c4fcfa70f96b404d5f594~mv2.jpg/v1/fill/w_979,h_634,al_c,q_90,enc_auto/I3_Vision%20in%20Ma.jpg" alt /></p>
<p>Mantis shrimps have 16 types of receptors. Twelve of these are responsible for color perception and the other four for color filtering. Neuroscientist Daniel Osorio of the University of Sussex likens this to wearing tinted glasses: "These filters block some frequencies of light while allowing others to pass through. It is comparable to wearing yellow-lens glasses to reduce blue light and increase clarity in cloudy weather."</p>
<p>Mantis shrimps actively fluoresce during their mating rituals, and the wavelength of this fluorescence matches the wavelengths detected by their eye pigments. Females are fertile only during certain phases of the tidal cycle, so the ability to sense the phase of the moon may help prevent wasted mating efforts. This sense can also tell the shrimps about the extent of the tide, which is vital for species living in shallow water. Thanks to their superior vision, they can judge depth and the distance between themselves and an object. What mantis shrimps actually perceive is thus almost impossible for us to imagine.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649802682623/dBH26eVqz.JPG" alt /></p>
<p>The eyes of mantis shrimps, set on stalks, can rotate at up to several hundred degrees per second, giving them a very wide field of view. Mantis shrimps can use all three visual regions of each eye simultaneously, achieving trinocular vision and taking the binocular vision seen in many animal species, including humans, one step further.</p>
<p>The range of light mantis shrimps can see is also much wider than ours. They can easily see light in the infrared and ultraviolet ranges, which is invisible to humans. Humans see from the 400-nanometer wavelength of blue light to the 700-nanometer wavelength of red light, whereas mantis shrimps see from roughly 250-300 nanometers (wavelengths we have no color names for and simply call ultraviolet) to 800-900 nanometers (which, similarly, we can only call infrared).</p>
<p><img src="https://i.imgur.com/3GagtCt.jpg" alt /></p>
<h3 id="heading-inspired-engineers-aim-to-develop-advanced-systems">Inspired Engineers Aim to Develop Advanced Systems</h3>
<p>Thanks to their eyes, which have a feature found in no other creature in the animal kingdom, mantis shrimps can perceive circularly polarized light (CPL). CPL is formed when an electromagnetic wave rotates 360 degrees around the axis of a light beam. This ability of mantis shrimps, the only creatures in the world known to perceive CPL, was discovered in 2008 and first announced in March of that year in the journal Current Biology. Inspired by this, engineers aim to develop advanced communication systems in which light can be used far more effectively, thanks to the CPL-sensing ability of mantis shrimps.</p>
<p>The engineers intend to improve image quality by achieving much higher resolution, exploiting the free rotation of light as in CPL.</p>
<p>Researchers suspect that the wide diversity of photoreceptors in the eyes of mantis shrimps allows visual information to be preprocessed by the eyes rather than the brain; otherwise, their brains would have to be bigger to handle the complex task of color perception. While their eyes are highly complex and not yet fully understood, the principle of the system seems simple. It has a set of sensitivities similar to the human visual system but works the other way around. In the human brain, the inferior temporal cortex contains a large number of color-specific neurons that process visual stimuli from the eyes to create the experience of color. The mantis shrimp instead uses different types of photoreceptors in its eyes to perform the same function, resulting in a more efficient system for an animal that needs fast color identification. In short, humans have few photoreceptor types but many color-tuned neurons, while mantis shrimps appear to have few color neurons and many photoreceptor classes.</p>
<p>According to a paper from the University of Queensland, the compound eyes of mantis shrimps can detect cancer and the activity of neurons, because they are sensitive enough to distinguish polarized light reflected from cancerous tissue from that reflected from healthy tissue. The study suggests this ability could be replicated in a camera by using aluminum nanowires on photodiodes to mimic the polarization-filtering microvilli.</p>
<p><img src="https://static.scientificamerican.com/sciam/assets/Image/2019/saw0219Adva31_d.png" alt /></p>
<p>This article is compiled from the following sources. I would like to thank <a target="_blank" href="https://www.linkedin.com/in/zehranrgi/">Zehra Nur Günindi</a> and <a target="_blank" href="https://github.com/Meminseeker">Muhammed Emin Arayıcı</a> for their valuable contributions and translation support in the creation of the article.</p>
<h4 id="heading-sources">►►Sources:</h4>
<ul>
<li>https://core.ac.uk/download/pdf/43365607.pdf</li>
<li>https://www.avaschroedl.com/vision-in-mantis-shrimp</li>
<li>https://www.scientificamerican.com/article/camera-mimics-mantis-shrimps-astounding-vision/</li>
<li>https://aeon.co/videos/how-the-mantis-shrimps-six-pupiled-eyes-put-2020-vision-to-shame</li>
<li>https://www.npr.org/sections/health-shots/2016/11/15/501443254/watch-mantis-shrimps-incredible-eyesight-yields-clues-for-detecting-cancer</li>
<li>https://www.science.org.au/curious/earth-environment/all-eyes-reef</li>
<li>https://stringfixer.com/en/Stomatopod</li>
<li>https://en.wikipedia.org/wiki/Mantis_shrimp</li>
<li>https://www.bilimup.com/evrenin-en-guclu-bokscusu-mantis-karidesi</li>
<li>https://evrimagaci.org/dunyanin-en-guclu-yumrugu-mantis-karidesi-1091</li>
<li>https://evrimagaci.org/gorme-yetimiz-anlamsiz-kilan-hayvan-mantis-karidesi-1151</li>
<li>https://en.peopleperproject.com/posts/5169-mantis-shrimp-facts-stomatopoda</li>
<li>http://kusursuzyaratilis.com/mukemmel-gozlere-sahip-mantis-karidesi/</li>
<li>https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;arnumber=6880796</li>
<li>https://ieeexplore.ieee.org/document/6880796</li>
<li>https://www.hayretedecek.com/dunyanin-en-guclu-canlisi-mantis-karidesi/</li>
<li>https://www.ntboxmag.com/2018/10/12/mantis-karidesinde-esinlenen-kamera-gelistiriyor/</li>
<li>https://www.hurriyet.com.tr/dunya/umut-bu-iki-goze-baglandi-14504503</li>
<li>https://www.youtube.com/watch?v=eGuZifKr0h4&amp;ab_channel=LoveNature</li>
<li>https://www.youtube.com/watch?v=t2FTavvZt_c</li>
<li>https://www.youtube.com/watch?v=ujrsE3ljcv4</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[The World’s Most Complex Visual System Inspiring Computer Vision]]></title><description><![CDATA[As someone who works in the field of Computer Vision, I have been curious about creatures whose visual abilities differ from ours.
Computer Vision is an interdisciplinary scientific field concerned with how computers can extract meaning from digital images or videos...]]></description><link>https://www.bulentsiyah.com/bilgisayarli-goruye-ilham-veren-dunyanin-en-karmasik-gorsel-sistemi</link><guid isPermaLink="true">https://www.bulentsiyah.com/bilgisayarli-goruye-ilham-veren-dunyanin-en-karmasik-gorsel-sistemi</guid><category><![CDATA[Computer Vision]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Tue, 12 Apr 2022 22:36:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1649802285264/oiB8ZB9gX.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As someone who works in the field of Computer Vision, I have been curious about creatures whose visual abilities differ from ours. </p>
<p>Computer Vision is an interdisciplinary scientific field concerned with how computers can extract meaning from digital images or videos. While this discipline aims, for instance, to give a machine the ability to see, it naturally involves trying to understand the creature that sees best and how it perceives the world, since that also defines the goal of "seeing at its best". </p>
<p>So, meet the creature with the best visual system in the world: Stomatopoda! (Better known as the mantis shrimp.)</p>
<p><img src="https://i.pinimg.com/736x/f0/50/de/f050de6ade5748235bec4057ccf4848b.jpg" alt /></p>
<p>The eyes of mantis shrimps, or stomatopods, are mounted on movable stalks and can move independently of each other. They are thought to have the most complex eyes and visual system in the animal kingdom. The eyes of mantis shrimps could open new ground in technology by enabling the development of next-generation communication systems. Their eyes contain a large number of cells that let them perceive at least 100,000 colors, ten times the number of colors the human eye can perceive. </p>
<h3 id="heading-stomatopodalari-daha-iyi-taniyalim">Stomatopoda’ları Daha İyi Tanıyalım</h3>
<p>Mantis shrimps, or stomatopods, are carnivorous marine crustaceans of the order Stomatopoda, which branched off from the other members of the class Malacostraca around 340 million years ago. </p>
<p>Every living creature on Earth has different visual abilities. Our eyes contain receptors that allow us to see colors. Dogs have two photoreceptor types (green and blue), humans have three (blue, red, and green), and birds have four (red, green, blue, and UV). So how many photoreceptor types do you think mantis shrimps have?</p>
<p><img src="https://static.wixstatic.com/media/2854b7_cdf36b72146c4fcfa70f96b404d5f594~mv2.jpg/v1/fill/w_979,h_634,al_c,q_90,enc_auto/I3_Vision%20in%20Ma.jpg" alt /></p>
<p>Mantis shrimps have no fewer than 16 types of receptors: 12 for color perception and 4 for color filtering. Daniel Osorio, a neuroscientist at the University of Sussex, compares this to wearing tinted glasses: ‘These filters block light at some frequencies while letting others pass. You can compare it to wearing yellow-tinted glasses to reduce blue light and increase clarity in cloudy weather.’</p>
<p>Mantis shrimps actively fluoresce during their mating rituals, and the wavelength of this fluorescence matches the wavelengths detected by their eye pigments. Females are fertile only during certain phases of the tidal cycle, so the ability to perceive the phase of the moon may help prevent wasted mating efforts. It may also give these shrimps information about the size of the tide, which is important for species living in shallow waters near the shore. Thanks to these superior visual abilities, they can compute depth and the distance of every object around them. It is almost impossible for us even to imagine what mantis shrimps perceive.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649802682623/dBH26eVqz.JPG" alt /></p>
<p>Their stalk-mounted eyes move fast enough to rotate several hundred degrees per second, giving them a very wide field of view. A mantis shrimp can use all three visual regions of each eye simultaneously, taking the binocular vision seen in many animal species, including humans, one step further to achieve trinocular vision.</p>
<p>The range of light that mantis shrimps can see is also much wider than ours. They can easily see light in regions that are infrared or ultraviolet to humans. Humans can see from the 400-nanometer wavelength of blue light up to the 700-nanometer wavelength of red light. Mantis shrimps, however, can see from 250-300 nanometers (a region we have no color names for and simply call ultraviolet) up to 800-900 nanometers (which, likewise, we can only describe as infrared).</p>
<p><img src="https://i.imgur.com/3GagtCt.jpg" alt /></p>
<h3 id="heading-ilham-alan-muhendisler-ileri-sistemleri-gelistirmeyi-hedefliyor">İlham Alan Mühendisler İleri Sistemleri Geliştirmeyi Hedefliyor</h3>
<p>Thanks to their eyes, mantis shrimps have an ability found in no other animal: they can perceive circularly polarized light (CPL). CPL arises when the electromagnetic waves of a light beam rotate 360 degrees around its axis of propagation. This ability of the mantis shrimp, the only creature known to perceive CPL, was discovered in 2008 and first announced in March of that year in the journal Current Biology. Inspired by this, engineers aim to develop advanced communication systems in which light is used far more effectively. By exploiting the free rotation of light, as in CPL, they hope to achieve much higher resolution and better image quality.</p>
<p>Researchers suspect that the greater variety of photoreceptors in mantis shrimp eyes allows visual information to be pre-processed by the eyes rather than the brain; otherwise, their brains would have to be larger to cope with the complex task of color perception. Although the eyes themselves are complex and not yet fully understood, the principle of the system appears simple. It has a range of sensitivities similar to the human visual system, but works in the opposite way. In the human brain, the inferior temporal cortex contains a large number of color-specific neurons that process visual stimuli from the eyes to create colorful experiences. The mantis shrimp instead uses the different types of photoreceptors in its eyes to perform the same function as human brain neurons, resulting in a more efficient system for an animal that needs rapid color identification. Humans have fewer photoreceptor types but more color-tuned neurons, while mantis shrimps have fewer color neurons and more photoreceptor classes.</p>
<p>A publication by researchers from the University of Queensland noted that the compound eyes of the mantis shrimp can detect cancer and neural activity, because they are sensitive to polarized light, which reflects differently from cancerous and healthy tissue. The study claims that this ability can be replicated in a camera by using aluminum nanowires to mimic the polarization-filtering microvilli that sit on top of the photodiodes.</p>
<p><img src="https://static.scientificamerican.com/sciam/assets/Image/2019/saw0219Adva31_d.png" alt /></p>
<p>This article was compiled from the sources below. I thank <a target="_blank" href="https://www.linkedin.com/in/zehranrgi/">Zehra Nur Günindi</a> and <a target="_blank" href="https://github.com/Meminseeker">Muhammed Emin Arayıcı</a> for their valuable contributions and translation support.</p>
<h4 id="heading-kaynaklar">►►Sources:</h4>
<ul>
<li>https://core.ac.uk/download/pdf/43365607.pdf</li>
<li>https://www.avaschroedl.com/vision-in-mantis-shrimp</li>
<li>https://www.scientificamerican.com/article/camera-mimics-mantis-shrimps-astounding-vision/</li>
<li>https://aeon.co/videos/how-the-mantis-shrimps-six-pupiled-eyes-put-2020-vision-to-shame</li>
<li>https://www.npr.org/sections/health-shots/2016/11/15/501443254/watch-mantis-shrimps-incredible-eyesight-yields-clues-for-detecting-cancer</li>
<li>https://www.science.org.au/curious/earth-environment/all-eyes-reef</li>
<li>https://stringfixer.com/tr/Stomatopod</li>
<li>https://en.wikipedia.org/wiki/Mantis_shrimp</li>
<li>https://www.bilimup.com/evrenin-en-guclu-bokscusu-mantis-karidesi</li>
<li>https://evrimagaci.org/dunyanin-en-guclu-yumrugu-mantis-karidesi-1091</li>
<li>https://evrimagaci.org/gorme-yetimizi-anlamsiz-kilan-hayvan-mantis-karidesi-1151</li>
<li>https://tr.peopleperproject.com/posts/5169-mantis-shrimp-facts-stomatopoda</li>
<li>http://kusursuzyaratilis.com/mukemmel-gozlere-sahip-mantis-karidesi/</li>
<li>https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;arnumber=6880796</li>
<li>https://ieeexplore.ieee.org/document/6880796</li>
<li>https://www.hayretedeceksin.com/dunyanin-en-guclu-canlisi-mantis-karidesi/</li>
<li>https://www.ntboxmag.com/2018/10/12/mantis-karidesinden-esinlenen-kamera-gelistiriliyor/</li>
<li>https://www.hurriyet.com.tr/dunya/umut-bu-iki-goze-baglandi-14504503</li>
<li>https://www.youtube.com/watch?v=eGuZifKr0h4&amp;ab_channel=LoveNature</li>
<li>https://www.youtube.com/watch?v=t2FTavvZt_c</li>
<li>https://www.youtube.com/watch?v=ujrsE3ljcv4</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Book Reviews on Artificial Intelligence and Technology]]></title><description><![CDATA[Hello 👋, let me start with a quote:
Your life changes in two ways: through the people you meet and the books you read. Do you know what happens if you don't read new books? You don't change. And if you aren't changing, you aren't growing. It's that...]]></description><link>https://www.bulentsiyah.com/yapay-zeka-ve-teknoloji-uzerine-kitap-incelemeleri</link><guid isPermaLink="true">https://www.bulentsiyah.com/yapay-zeka-ve-teknoloji-uzerine-kitap-incelemeleri</guid><category><![CDATA[books]]></category><category><![CDATA[Artificial Intelligence]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Sun, 22 Aug 2021 15:42:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1629447724509/lpXquP0Gs.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello 👋, let me start with a quote:</p>
<p>Your life changes in two ways: through the people you meet and the books you read. Do you know what happens if you don't read new books? You don't change. And if you aren't changing, you aren't growing. It's that simple. (Harvey B. Mackay, author of the bestseller Swim with the Sharks Without Being Eaten Alive, on the power of reading)</p>
<p>Together with my friend İlker Aytı, we humbly record videos in which we discuss books on artificial intelligence and technology. We also invite other friends who are experts in their fields to join these videos. With Gül Bulut, we had the chance to review Ethem Alpaydın's book together (and we are preparing for one more book). Very soon we will also be recording videos with Şefik İlkin Serengil and Semih Arıcan.</p>
<h3 id="biz-kimiz-ve-videolari-neden-cekiyoruz">Biz kimiz ve videoları neden çekiyoruz?</h3>
<p>Videoların amatörlüğünün farkındayız zaten amacımız YouTuber olmak değil ve daha önce herkese açık bir ortamda veya kameralar önünde konuşmadık (ama gittikçe iyileşiyoruz :) ). Sadece Yapay zeka alanına olan ilgimizden okuduğumuz kitapların bizim için anlamlı bölümleri üzerine konuşuyoruz. İzleyenlere sadece bu alanla ilgili olan kitapların listelendiğini görmeleri ve kitapta aşağı yukarı nelerden bahsedildiğini öğrenmeleri bizim için yeterli olacaktır. Şimdi biz kimiz biraz bundan bahsedelim çünkü videoların başında artık kendimizden bahsetmiyoruz. </p>
<p><strong>İlker Aytı</strong> graduated from the Computer Engineering department of Yeditepe University and has been working professionally for more than 9 years. While developing cloud-native, microservice-based (mostly backend) applications for companies such as Papara, Türk Traktör, Procter &amp; Gamble, Borusan, RE/MAX, and Philip Morris International, he has also been working in the field of artificial intelligence and deep learning for the last 3 years.</p>
<p><strong>Gül Bulut</strong> graduated from the Electronics department of Uludağ University in 2013. For her first 2 years she worked on defense-industry collaborations and TÜBİTAK projects. In 2015 she began a master's degree in Satellite Communication and Remote Sensing at the ITU Informatics Institute, and since that year she has continued her career at Softtech, taking on various roles across the software development life cycle in a range of projects. She continues her work on Aircar projects within the Softtech-Aircar collaboration. </p>
<p>As for me, <strong>Bülent Siyah</strong>: I graduated from the Computer Engineering department in 2012. For the first 5 years I developed applications for the Android and iOS operating systems. Since 2017 I have been working on artificial intelligence at a company called Softtech, and for the last 2 years also at Aircar, a company developing a flying car (due to the Softtech-Aircar collaboration, I develop projects at both companies). I work on deep learning and computer vision projects for use in unmanned aerial vehicles (or large drones). My work includes developing an emergency landing site identification system, an airborne threat detection and avoidance system, guidance for autonomous port landing, and a vision-based navigation system for GPS-denied conditions.</p>
<h3 id="inceledigimiz-kitaplar">İncelediğimiz Kitaplar</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1629634339556/cXqdEjzqU.jpeg" alt="Listemizdeki Kitapların bir kısmının resmi" />
Öncelikle liste sürekli yenilendiği için ben oynatma listesinin linkini paylaşıyım.
<a target="_blank" href="https://www.youtube.com/watch?v=7KM1RgqnsRo&amp;list=PLxN-OTv-e8soonDsT2dvzREh9faIA5GYc&amp;ab_channel=BulentSiyah">Yapay Zeka ve Teknoloji Üzerine Kitap İncelemeleri Youtube Linki</a></p>
<p>Şimdi sırayla inceleme videolarını paylaşıyım.</p>
<h4 id="9-yapay-ogrenme-yeni-yapay-zeka-ethem-alpaydin-kitap-inceleme">#9 - Yapay Öğrenme: Yeni Yapay Zeka (Ethem ALPAYDIN) - Kitap İnceleme</h4>
<iframe width="560" height="315" src="https://www.youtube.com/embed/7KM1RgqnsRo"></iframe>

<h4 id="8-yapay-zeka-ve-gelecek-editor-prof-dr-gonca-telli-kitap-inceleme">#8 - Yapay Zeka ve Gelecek (Editör: Prof. Dr. Gonca Telli) - Kitap İnceleme</h4>
<iframe width="560" height="315" src="https://www.youtube.com/embed/L5wMEI5wAVU"></iframe>

<h4 id="7-bilincin-gizemi-john-r-searle-kitap-inceleme">#7 - Bilincin Gizemi (John R. Searle) - Kitap İnceleme</h4>
<iframe width="560" height="315" src="https://www.youtube.com/embed/LLQvz6YTWQw"></iframe>

<h4 id="6-dijital-donusum-yapay-zeka-harvard-business-review-kitap-inceleme">#6 - Dijital Dönüşüm-Yapay Zeka (Harvard Business Review) - Kitap İnceleme</h4>
<iframe width="560" height="315" src="https://www.youtube.com/embed/w9V1liz3jeM"></iframe>

<h4 id="5-yeni-dunya-yeni-ag-cem-say-kitap-inceleme">#5 - Yeni Dünya Yeni Ağ (Cem Say) - Kitap İnceleme</h4>
<iframe width="560" height="315" src="https://www.youtube.com/embed/c4E84MWtKF0"></iframe>

<h4 id="4-ben-robot-isaac-asimov-kitap-inceleme">#4 - Ben Robot (ISAAC ASIMOV) - Kitap İnceleme</h4>
<iframe width="560" height="315" src="https://www.youtube.com/embed/_G3Rsj5ii6Y"></iframe>

<h4 id="3-robotlarin-yukselisi-yapay-zeka-ve-issiz-bir-gelecek-tehlikesi-martin-ford-kitap-inceleme">#3 - Robotların Yükselişi: Yapay Zeka ve İşsiz Bir Gelecek Tehlikesi (Martin Ford) - Kitap İnceleme</h4>
<iframe width="560" height="315" src="https://www.youtube.com/embed/Yze-TgKJzdI"></iframe>

<h4 id="yapay-ogrenme-yeni-yapay-zeka-ethem-alpaydin-kitap-inceleme">Yapay Öğrenme: Yeni Yapay Zeka (Ethem ALPAYDIN) - Kitap İnceleme</h4>
<iframe width="560" height="315" src="https://www.youtube.com/embed/e_hANbGnHD0"></iframe>

<h4 id="2-50-soruda-yapay-zeka-cem-say-kitap-inceleme">#2 - 50 Soruda Yapay Zeka (Cem Say) - Kitap İnceleme</h4>
<iframe width="560" height="315" src="https://www.youtube.com/embed/KbXBZXob9OE"></iframe>


<h4 id="1-python-ile-derin-ogrenme-francois-chollet-kitap-inceleme">#1 - Python ile Derin Öğrenme (François Chollet) - Kitap İnceleme</h4>
<iframe width="560" height="315" src="https://www.youtube.com/embed/dYSt3p4a6HY"></iframe>



<p>►►Personal blogs and accounts of the reviewers</p>
<p>► İlker Aytı </p>
<ul>
<li>https://www.ilkerayti.com</li>
<li>LinkedIn: https://www.linkedin.com/in/ilkerayti</li>
<li>Github: https://github.com/iayti​​</li>
<li>Kaggle: https://www.kaggle.com/ilkerayti​​</li>
</ul>
<p>►Gül Bulut </p>
<ul>
<li>LinkedIn: https://www.linkedin.com/in/fgulyvz</li>
<li>Github: https://github.com/gulbulut​</li>
<li>Kaggle: https://www.kaggle.com/gulyvz</li>
</ul>
<p>►Bülent Siyah </p>
<ul>
<li>https://www.bulentsiyah.com​​</li>
<li>LinkedIn: https://www.linkedin.com/in/bulentsiyah</li>
<li>Github: https://github.com/bulentsiyah​​</li>
<li>Kaggle: https://www.kaggle.com/bulentsiyah</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[The Highlights of Elon Musk's Tesla Artificial Intelligence(AI) Day - 20.08.2021]]></title><description><![CDATA[Hi, Tesla AI day was very impressive. Here are the highlights of the Tesla Artificial Intelligence day. “There’s a tremendous amount of work to make it work and that’s why we need talented people to join and solve the problem,” said Musk
Very great s...]]></description><link>https://www.bulentsiyah.com/the-highlights-of-elon-musks-tesla-artificial-intelligenceai-day-20082021</link><guid isPermaLink="true">https://www.bulentsiyah.com/the-highlights-of-elon-musks-tesla-artificial-intelligenceai-day-20082021</guid><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[robotics]]></category><category><![CDATA[neural networks]]></category><category><![CDATA[news]]></category><category><![CDATA[Culture]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Sun, 22 Aug 2021 13:46:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1629636893892/H3PGCROjm.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi, Tesla AI Day was very impressive. Here are the highlights of Tesla's Artificial Intelligence Day. “There’s a tremendous amount of work to make it work, and that’s why we need talented people to join and solve the problem,” said Musk.</p>
<p>A great summary by Lex Fridman:</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/ABbDB6xri8o"></iframe>

<h2 id="tesla-bot-a-definitely-real-humanoid-robot">Tesla Bot: A definitely real humanoid robot</h2>
<p>When Tesla talked about using its advanced technology in applications outside of cars, we didn’t expect it meant robot slaves. That’s not an exaggeration. CEO Elon Musk envisions a world in which human drudgery like grocery shopping, “the work that people least like to do,” can be taken over by humanoid robots like the Tesla Bot. The bot is 5’8″, 125 pounds, can deadlift 150 pounds, walk at 5 miles per hour, and has a screen for a head that displays important information.</p>
<p><img src="https://static.euronews.com/articles/stories/05/99/77/62/808x439_cmsv2_8241961b-4ca6-5d2f-83c0-11358319d0f3-5997762.jpg" alt="Tesla Bot" /></p>
<p>“It’s intended to be friendly, of course, and navigate a world built for humans,” said Musk. “We’re setting it such that at a mechanical and physical level, you can run away from it and most likely overpower it.”</p>
<p>Because everyone is definitely afraid of getting beat up by a robot that’s truly had enough, right?</p>
<p><img src="https://static.euronews.com/articles/stories/05/99/77/62/808x437_cmsv2_f35fb846-ed0a-5965-aaec-c941a9e7d54f-5997762.jpg" alt="Tesla Bot" /></p>
<p>The bot, a prototype of which is expected for next year, is being proposed as a non-automotive robotic use case for the company’s work on neural networks and its Dojo advanced supercomputer.</p>
<h2 id="d1-chip-unveiling-of-the-chip-to-train-dojo">D1 Chip: Unveiling of the chip to train Dojo</h2>
<p><img src="https://techcrunch.com/wp-content/uploads/2021/08/Screen-Shot-2021-08-20-at-1.53.25-pm.png?resize=2048,1210" alt="D1 Chip" /></p>
<p>Tesla director Ganesh Venkataramanan unveiled Tesla’s computer chip, designed and built entirely in-house, that the company is using to run its supercomputer, Dojo. Much of Tesla’s AI architecture is dependent on Dojo, the neural network training computer that Musk says will be able to process vast amounts of camera imaging data four times faster than other computing systems. The idea is that the Dojo-trained AI software will be pushed out to Tesla customers via over-the-air updates. </p>
<p>The chip that Tesla revealed on Thursday is called “D1,” and it is built on 7 nm technology. Venkataramanan proudly held up the chip, which he said has GPU-level compute with CPU connectivity and twice the I/O bandwidth of “the state of the art networking switch chips that are out there today and are supposed to be the gold standards.” He walked through the technicalities of the chip, explaining that Tesla wanted to own as much of its tech stack as possible to avoid any bottlenecks. </p>
<p>Aside from limited availability, the overall goal of taking the chip production in-house is to increase bandwidth and decrease latencies for better AI performance.</p>
<p><img src="https://electrek.co/wp-content/uploads/sites/3/2021/08/Screen-Shot-2021-08-20-at-5.38.00-AM.jpg?resize=2048,1305" alt /></p>
<p>“We can do compute and data transfers simultaneously, and our custom ISA, which is the instruction set architecture, is fully optimized for machine learning workloads,” said Venkataramanan at AI Day. “This is a pure machine learning machine.”</p>
<p>Venkataramanan also revealed a “training tile” that integrates multiple chips to get higher bandwidth and an incredible computing power of 9 petaflops per tile and 36 terabytes per second of bandwidth. Together, the training tiles compose the Dojo supercomputer.</p>
<h2 id="supercomputer-dojo-to-full-self-driving-and-beyond">Supercomputer Dojo :  To Full Self-Driving and beyond</h2>
<p>Many of the speakers at the AI Day event noted that Dojo will not just be a tech for Tesla’s “Full Self-Driving” (FSD) system, its admittedly impressive advanced driver assistance system that is also definitely not yet fully self-driving or autonomous. The powerful supercomputer is built with multiple aspects, such as the simulation architecture, that the company hopes to expand to be universal and even open up to other automakers and tech companies.</p>
<p><img src="https://electrek.co/wp-content/uploads/sites/3/2021/08/Screen-Shot-2021-08-19-at-9.58.16-PM.jpg" alt /></p>
<p>“This is not intended to be just limited to Tesla cars,” said Musk. “Those of you who’ve seen the full self-driving beta can appreciate the rate at which the Tesla neural net is learning to drive. And this is a particular application of AI, but I think there’s more applications down the road that will make sense.”</p>
<p><img src="https://electrek.co/wp-content/uploads/sites/3/2021/08/Screen-Shot-2021-08-19-at-9.59.22-PM.jpg" alt /></p>
<p>Musk said Dojo is expected to be operational next year, at which point we can expect talk about how this tech can be applied to many other use cases.</p>
<h2 id="solving-computer-vision-problems">Solving computer vision problems</h2>
<p>Tesla’s head of AI, Andrej Karpathy, described Tesla’s architecture as “building an animal from the ground up” that moves around, senses its environment and acts intelligently and autonomously based on what it sees.</p>
<p>Karpathy illustrated how Tesla’s neural networks have developed over time, and how now, the visual cortex of the car, which is essentially the first part of the car’s “brain” that processes visual information, is designed in tandem with the broader neural network architecture so that information flows into the system more intelligently.</p>
<p>The two main problems that Tesla is working on solving with its computer vision architecture are temporary occlusions (like cars at a busy intersection blocking Autopilot’s view of the road beyond) and signs or markings that appear earlier in the road (like if a sign 100 meters back says the lanes will merge, the computer once upon a time had trouble remembering that by the time it made it to the merge lanes).</p>
<p>To solve this, Tesla engineers fell back on a spatial recurrent network video module, wherein different aspects of the module keep track of different aspects of the road and form a space-based and a time-based queue, both of which create a cache of data that the model can refer back to when trying to make predictions about the road.</p>
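The dual-queue idea described above can be sketched in a few lines of Python. This is a hypothetical toy for illustration only: the class name, thresholds, and `step` interface are invented here and are not Tesla's actual implementation.

```python
from collections import deque

class VideoModule:
    """Toy sketch of a video module with a time-based and a space-based feature cache."""
    TIME_STEP = 0.25   # seconds between time-based pushes (illustrative value)
    SPACE_STEP = 1.0   # meters between space-based pushes (illustrative value)

    def __init__(self, maxlen=16):
        self.time_queue = deque(maxlen=maxlen)   # oldest entries fall off
        self.space_queue = deque(maxlen=maxlen)
        self._last_t = None
        self._dist = 0.0

    def step(self, features, t, meters_moved):
        # Time-based queue: keeps recent history even while the car is
        # stopped (e.g. temporarily occluded at a busy intersection).
        if self._last_t is None or t - self._last_t >= self.TIME_STEP:
            self.time_queue.append(features)
            self._last_t = t
        # Space-based queue: keeps features seen a fixed distance back
        # (e.g. a lane-merge sign passed well before the merge point).
        self._dist += meters_moved
        if self._dist >= self.SPACE_STEP:
            self.space_queue.append(features)
            self._dist = 0.0
        # A downstream prediction network would consume both caches.
        return list(self.time_queue), list(self.space_queue)
```

Stepping this once per frame fills the time queue on a fixed clock, while the space queue only advances as the car moves, which is what lets old-but-nearby observations survive in the cache.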
<p>The company flexed its over 1,000-person manual data labeling team and walked the audience through how Tesla auto-labels certain clips, many of which are pulled from Tesla’s fleet on the road, in order to be able to label at scale. With all of this real-world info, the AI team then uses incredible simulation, creating “a video game with Autopilot as the player.” The simulations help particularly with data that’s difficult to source or label, or if it’s in a closed loop.</p>
<p>“We basically want to encourage anyone who is interested in solving real-world AI problems at either the hardware or the software level to join Tesla, or consider joining Tesla,” said Musk.</p>
<p>All broadcast</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/j0z4FweCy4M?t=2828"></iframe>


<p>References:
<a target="_blank" href="https://techcrunch.com/2021/08/19/top-five-highlights-of-elon-musks-tesla-ai-day/">https://techcrunch.com/2021/08/19/top-five-highlights-of-elon-musks-tesla-ai-day/</a></p>
<p><a target="_blank" href="https://electrek.co/2021/08/20/tesla-dojo-supercomputer-worlds-new-most-powerful-ai-training-machine/">https://electrek.co/2021/08/20/tesla-dojo-supercomputer-worlds-new-most-powerful-ai-training-machine/</a> </p>
<p><a target="_blank" href="https://electrek.co/2021/08/19/tesla-bot-humanoid-robot/">https://electrek.co/2021/08/19/tesla-bot-humanoid-robot/</a></p>
<p><a target="_blank" href="https://dronedj.com/2021/08/20/elon-musk-tesla-bot/">https://dronedj.com/2021/08/20/elon-musk-tesla-bot/</a></p>
<p><a target="_blank" href="https://electrek.co/2021/06/21/elon-musk-tesla-ai-day-progress-recruit-talent/">https://electrek.co/2021/06/21/elon-musk-tesla-ai-day-progress-recruit-talent/</a></p>
]]></content:encoded></item><item><title><![CDATA[The document in which the words "Artificial Intelligence" were written for the first time]]></title><description><![CDATA[Click to see this work on My Kaggle Profile

The term “artificial intelligence” is coined in a proposal for a “2 month, 10 man study of artificial intelligence” submitted by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Natha...]]></description><link>https://www.bulentsiyah.com/the-document-in-which-the-words-artificial-intelligence-were-written-for-the-first-time</link><guid isPermaLink="true">https://www.bulentsiyah.com/the-document-in-which-the-words-artificial-intelligence-were-written-for-the-first-time</guid><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Wed, 16 Jun 2021 11:16:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1623842008473/Vcpn8oeF5.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Click to see this work on <a target="_blank" href="https://www.kaggle.com/general/246669">My Kaggle Profile</a></p>
<p><img src="https://rockfound.rockarch.org/documents/20181/35634/AI.jpg/58a8bdff-d6a9-46f0-a27b-8bcb02733c19?t=1490820335123" alt="from John McCarthy to Robert S. Morrison" /></p>
<p>The term “artificial intelligence” was coined in a proposal for a “2 month, 10 man study of artificial intelligence” submitted by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories). The workshop, which took place a year later, in July and August 1956, is generally considered the official birthdate of the new field. Most of us know this, but from a book I read I learned that there is a letter containing the first written use of the words "artificial intelligence". The letter was written by John McCarthy while requesting funding from the Rockefeller Foundation. This got me very excited; for those who are curious about the rest of the letter, see <a target="_blank" href="https://rockfound.rockarch.org/digital-library-listing/-/asset_publisher/yYxpQfeI4W8N/content/proposal-for-the-dartmouth-summer-research-project-on-artificial-intelligence">"Proposal for the Dartmouth summer research project on artificial intelligence" </a>
<a target="_blank" href="https://rockfound.rockarch.org/documents/20181/35639/AI.pdf/a6db3ab9-0f2a-4ba0-8c28-beab66b2c062">PDF download</a></p>
<h2 id="proposal-for-the-dartmouth-summer-research-project-on-artificial-intelligence">Proposal for the Dartmouth summer research project on artificial intelligence</h2>
<h3 id="dartmouth-workshop">Dartmouth workshop</h3>
<p><img src="https://miro.medium.com/max/724/0*t67kTDDGNaLsL_1t" alt /></p>
<p>The Dartmouth Summer Research Project on Artificial Intelligence was a 1956 summer workshop widely considered to be the founding event of artificial intelligence as a field.</p>
<p>The project lasted approximately six to eight weeks and was essentially an extended brainstorming session. Eleven mathematicians and scientists originally planned to attend; not all of them attended, but more than ten others came for short times.
<img src="https://miro.medium.com/max/724/0*8MW8iP2QC_WNhmiW" alt /></p>
<h3 id="background">Background</h3>
<p>
In 1955, John McCarthy, then a young Assistant Professor of Mathematics at Dartmouth College, decided to organize a group to clarify and develop ideas about thinking machines. He picked the name 'Artificial Intelligence' for the new field.</p>
<p>In early 1955, McCarthy approached the Rockefeller Foundation to request funding for a summer seminar at Dartmouth for about 10 participants. In June, he and Claude Shannon, a founder of information theory then at Bell Labs, met with Robert Morison, Director of Biological and Medical Research to discuss the idea and possible funding, though Morison was unsure whether money would be made available for such a visionary project.</p>
<p>On September 2, 1955, the project was formally proposed by McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon. The proposal is credited with introducing the term 'artificial intelligence'.</p>
<h3 id="the-proposal-states">The Proposal states</h3>
<p><img src="https://ichi.pro/assets/images/max/724/0*HDEnJ8a9pTO83Wsq" alt />
We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.</p>
<p>The proposal goes on to discuss computers, natural language processing, neural networks, theory of computation, abstraction and creativity (these areas within the field of artificial intelligence are considered still relevant to the work of the field).</p>
<h3 id="dates">Dates</h3>
<p>
The Dartmouth Workshop is said to have run for six weeks in the summer of 1956. Ray Solomonoff's notes written during the Workshop, however, say it ran for roughly eight weeks, from about June 18 to August 17. Solomonoff's Dartmouth notes start on June 22; June 28 mentions Minsky, June 30 mentions Hanover, N.H., July 1 mentions Tom Etter. On August 17, Ray gave a final talk.</p>
<h3 id="event-and-aftermath">Event and aftermath</h3>
<p>
Did they reach their goals in those 2 months? I mean, the 10 best researchers in the field were under one roof. They must have found something.</p>
<p>They failed to do what they intended, and no projects were launched. McCarthy gives three reasons for the failure:</p>
<ul>
<li>First, the Rockefeller Foundation gave them only half of the money they had asked for.</li>
<li>Second, and this is the main reason, all of the participants had their own research agendas and did not stray far from them.</li>
<li>Third, participants arrived at Dartmouth at different times and stayed for different lengths of time.</li>
</ul>
<h3 id="the-next-fifty-years">The Next Fifty Years</h3>
<p><img src="https://ichi.pro/assets/images/max/724/0*AfZzi7EZtWL1k281" alt />
Trenchard More, John McCarthy, Marvin Minsky, Oliver Selfridge, and Ray Solomonoff</p>
<h3 id="sources">Sources</h3>
<ul>
<li>https://en.wikipedia.org/wiki/Dartmouth_workshop</li>
<li>https://ichi.pro/tr/dartmouth-workshop-yapay-zekanin-dogdugu-yer-58021580368274</li>
<li>https://www.researchgate.net/journal/Ai-Magazine-0738-4602</li>
<li>http://www-formal.stanford.edu/jmc/slides/dartmouth/dartmouth/node1.html</li>
<li>http://raysolomonoff.com/dartmouth/boxa/dart56more5th6thweeks.pdf</li>
<li>http://raysolomonoff.com/dartmouth/dartray.pdf</li>
<li>https://medium.com/@pemey/whos-that-kid-laughing-with-high-socks-in-the-middle-of-summer-7801a34feeef</li>
<li>https://www.cantorsparadise.com/the-birthplace-of-ai-9ab7d4e5fb00</li>
<li>https://chsasank.github.io/classic_papers/darthmouth-artifical-intelligence-summer-resarch-proposal.html</li>
<li>https://www.researchgate.net/publication/220494500_Cybernetics_Automata_Studies_and_the_Dartmouth_Conference_on_Artificial_Intelligence</li>
<li>https://rockfound.rockarch.org/digital-library-listing/-/asset_publisher/yYxpQfeI4W8N/content/letter-from-robert-s-morison-to-john-mccarthy-1955-november-30</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Coordinate Calculation from Aerial Images]]></title><description><![CDATA[https://github.com/bulentsiyah/Coordinate-Calculation-from-Aerial-Images 
This study is an example for a simple computer vision. In the study, the coordinate point is based on calculating the coordinate information of another point on the picture by ...]]></description><link>https://www.bulentsiyah.com/coordinate-calculation-from-aerial-images</link><guid isPermaLink="true">https://www.bulentsiyah.com/coordinate-calculation-from-aerial-images</guid><category><![CDATA[opencv]]></category><category><![CDATA[python beginner]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Sat, 23 Jan 2021 16:03:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1611504261754/BacnuJfeR.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p> <a target="_blank" href="https://github.com/bulentsiyah/Coordinate-Calculation-from-Aerial-Images">https://github.com/bulentsiyah/Coordinate-Calculation-from-Aerial-Images</a> </p>
<p>This study is a simple computer vision example. It calculates the coordinates of another point in an image by reference to a point whose coordinates are already known.</p>
<p>The only additional information required is how many centimeters one pixel corresponds to in the image.</p>
<p><a target="_blank" href="https://github.com/bulentsiyah/Coordinate-Calculation-from-Aerial-Images/blob/master/src/Figure_1.png"><img src="https://github.com/bulentsiyah/Coordinate-Calculation-from-Aerial-Images/raw/master/src/Figure_1.png" alt /></a></p>
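<p>Conceptually there are two steps: convert the pixel offset into a distance and a compass bearing, and then shift the known GPS position by that distance and bearing. The sketch below is my own stand-alone approximation of what the repo's <code>GPSUpdater</code> and <code>GeoCalculation</code> helpers do; the function names and the spherical-Earth model are my assumptions, not the repo's actual code:</p>
<pre><code># Hedged sketch: a stand-alone approximation of the distance/bearing and
# destination-point math. NOT the repo's actual GPSUpdater/GeoCalculation code.
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in meters (spherical model)

def pixel_offset_to_distance_bearing(dx_pixels, dy_pixels, pixel_in_centimeters):
    """dx is positive toward east, dy is positive toward south (image rows grow downward)."""
    east_m = dx_pixels * pixel_in_centimeters / 100.0
    south_m = dy_pixels * pixel_in_centimeters / 100.0
    distance_m = math.hypot(east_m, south_m)
    # Compass bearing: 0 = north, 90 = east, measured clockwise.
    bearing_deg = math.degrees(math.atan2(east_m, -south_m)) % 360.0
    return distance_m, bearing_deg

def shift_position(lat_deg, lon_deg, distance_m, bearing_deg):
    """Standard destination-point formula on a spherical Earth."""
    lat1, lon1 = math.radians(lat_deg), math.radians(lon_deg)
    brg = math.radians(bearing_deg)
    ang = distance_m / EARTH_RADIUS_M  # angular distance in radians
    lat2 = math.asin(math.sin(lat1) * math.cos(ang) +
                     math.cos(lat1) * math.sin(ang) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(ang) * math.cos(lat1),
                             math.cos(ang) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# 1000 px east and 2000 px south at 10 cm/pixel = 100 m east, 200 m south.
d, b = pixel_offset_to_distance_bearing(1000, 2000, 10)
lat, lon = shift_position(41.01209534207387, 28.971832514674194, d, b)
print(d, b, lat, lon)
</code></pre>
<p>With a 10 cm/pixel ratio, the 1000 px / 2000 px offset is 100 m east and 200 m south, i.e. roughly 223.6 m at a bearing of about 153 degrees.</p>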
<pre><code><span class="hljs-keyword">import</span> cv2 
<span class="hljs-keyword">import</span> matplotlib
<span class="hljs-keyword">import</span> matplotlib.pyplot <span class="hljs-keyword">as</span> plt
<span class="hljs-keyword">from</span> utils <span class="hljs-keyword">import</span> PixelLocation
<span class="hljs-keyword">from</span> gpsupdater <span class="hljs-keyword">import</span> GPSUpdater 
<span class="hljs-keyword">from</span> geocalculation <span class="hljs-keyword">import</span> GeoCalculation

def main():
    image = "src/istanbul_aerial_1pixel_10_centimeters.png"

    image = cv2.imread(image, <span class="hljs-number">1</span>)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    height, width = image.shape[:<span class="hljs-number">2</span>]
    print(width, height, "width, height")
    ratio_pixel_centimeters = <span class="hljs-number">10</span>

    selected_position = PixelLocation(
        label="GPS location known place",
        x_pixel=<span class="hljs-type">int</span>(width / <span class="hljs-number">2</span>) ,
        y_pixel=<span class="hljs-number">480</span>,
        latitude=<span class="hljs-number">41.01209534207387</span>,
        longitude= <span class="hljs-number">28.971832514674194</span>,
    )

    calculation_position = PixelLocation(
        label="GPS location, location to be calculated",
        x_pixel=<span class="hljs-type">int</span>(width / <span class="hljs-number">2</span>) + <span class="hljs-number">1000</span>, # <span class="hljs-number">1000</span> / ratio_pixel_centimeters =  <span class="hljs-number">100</span> meters east
        y_pixel=<span class="hljs-number">480</span> + <span class="hljs-number">2000</span>, # <span class="hljs-number">2000</span> / ratio_pixel_centimeters = <span class="hljs-number">200</span> meters south
        latitude=<span class="hljs-number">0.0</span>,
        longitude= <span class="hljs-number">0.0</span>,
    )


    print(<span class="hljs-string">'selected_position x:{0}, y:{1}, calculation_position x:{2}, y:{3}'</span>.format(selected_position.x_pixel,
    selected_position.y_pixel, 
    calculation_position.x_pixel,
    calculation_position.y_pixel))

    text_selected = <span class="hljs-string">'selected_position latitude:{0:.7f}, longitude:{1:.7f}, '</span>.format(selected_position.latitude,
    selected_position.longitude)
    print(text_selected)

    distance, bearing = GPSUpdater.distance_bearing_calculator_using_parameters(destination_x=calculation_position.x_pixel,
                                                                                     destination_y=calculation_position.y_pixel,
                                                                                    source_x=selected_position.x_pixel,
                                                                                    source_y=selected_position.y_pixel,
                                                                                    image_height=height,
                                                                                    pixel_in_centimeters=ratio_pixel_centimeters)

    print(<span class="hljs-string">'distance:{0} bearing:{1}'</span>.format(distance,bearing))

    calculation_position.latitude, calculation_position.longitude = GeoCalculation.calculate_new_gps_position(lat1=selected_position.latitude,
    lon1=selected_position.longitude,distance=distance,bearing=bearing)


    text_calculation = <span class="hljs-string">'calculation_position latitude:{0:.7f}, longitude:{1:.7f}'</span>.format(calculation_position.latitude,calculation_position.longitude)
    print(text_calculation)


    start_point = (selected_position.x_pixel, selected_position.y_pixel)
    end_point = (calculation_position.x_pixel, calculation_position.y_pixel )

    thickness = <span class="hljs-number">10</span>
    color = (<span class="hljs-number">255</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>) 
    image = cv2.line(image, start_point, end_point, color, thickness)


    org_y = <span class="hljs-number">200</span>
    org_x = <span class="hljs-number">10</span>
    org = (org_x, org_y) # x:<span class="hljs-keyword">column</span> y:<span class="hljs-keyword">row</span> 
    font = cv2.FONT_HERSHEY_SIMPLEX
    fontScale = <span class="hljs-number">3</span>
    color = (<span class="hljs-number">255</span>, <span class="hljs-number">255</span>, <span class="hljs-number">0</span>) 
    paddingText = <span class="hljs-number">50</span>

    cv2.putText(image, text_selected, org, font, fontScale, color, thickness, cv2.LINE_AA) 

    org_y = org_y+<span class="hljs-number">3</span>*paddingText+thickness
    org_x = <span class="hljs-number">10</span>
    org = (org_x, org_y)
    cv2.putText(image,text_calculation , org, font, fontScale, color, thickness, cv2.LINE_AA)

    org_y = org_y+<span class="hljs-number">3</span>*paddingText+thickness
    org_x = <span class="hljs-number">10</span>
    org = (org_x, org_y)
    cv2.putText(image,<span class="hljs-string">'distance:{0:.2f} meters'</span>.format(distance) , org, font, fontScale, color, thickness, cv2.LINE_AA)



    plt.figure(figsize=(<span class="hljs-number">10</span>, <span class="hljs-number">10</span>))
    plt.imshow(image)
    plt.<span class="hljs-keyword">show</span>()


<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">'__main__'</span>:
    main()
</code></pre>]]></content:encoded></item><item><title><![CDATA[Python Basics, Algorithms, Data Structures, Object Oriented Programming, Job Interview Questions]]></title><description><![CDATA[https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions 
Hello, this repo is a repository I've compiled with basic python exercises, algorithms, data structures, object-oriented prog...]]></description><link>https://www.bulentsiyah.com/python-basics-algorithms-data-structures-object-oriented-programming-job-interview-questions</link><guid isPermaLink="true">https://www.bulentsiyah.com/python-basics-algorithms-data-structures-object-oriented-programming-job-interview-questions</guid><category><![CDATA[python beginner]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Fri, 22 Jan 2021 15:57:48 GMT</pubDate><content:encoded><![CDATA[<p> <a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions">https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions</a> </p>
<p>Hello, this repo is a collection I've compiled of basic Python exercises, algorithms, data structures, object-oriented programming, job interview questions (on data science, machine learning, and deep learning), clean code, and Git usage.</p>
<p>You can find all the resources I used to create the repo in the References section. Enjoy!</p>
<p>Table of Contents</p>
<ul>
<li><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#folders-and-files-tree-in-this-repo">Folders and Files Tree in this Repo</a><ul>
<li><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#git-handbook">📂Git Handbook</a></li>
<li><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#job-interview-questions">📂Job Interview Questions</a></li>
<li><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#object-oriented-programming">📂Object Oriented Programming</a></li>
<li><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#python">📂Python</a></li>
<li><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#clean-code">📂Clean Code</a></li>
<li><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#data-structures">📂Data Structures</a></li>
<li><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#algorithms">📂Algorithms</a></li>
</ul>
</li>
<li><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#-license">📝 License</a></li>
<li><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#-show-your-support">👨‍🚀 Show your support</a></li>
<li><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#references">References:</a><ul>
<li><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#clean-code-1">Clean Code</a></li>
<li><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#data-structures-1">Data Structures</a></li>
<li><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#interview_questions">Interview_Questions</a></li>
</ul>
</li>
</ul>
<h2 id="httpsgithubcombulentsiyahpython-basics-algorithms-data-structures-object-oriented-programming-job-interview-questionsfolders-and-files-tree-in-this-repofolders-and-files-tree-in-this-repo"><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#folders-and-files-tree-in-this-repo"></a>Folders and Files Tree in this Repo</h2>
<h3 id="httpsgithubcombulentsiyahpython-basics-algorithms-data-structures-object-oriented-programming-job-interview-questionsgit-handbookgit-handbook"><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#git-handbook"></a>📂Git Handbook</h3>
<p>┃📜git-cheat-sheet-education.pdf<br />┃📜github-git-cheat-sheet.pdf<br />┃ ┗ 📜SWTM-2088_Atlassian-Git-Cheatsheet.pdf</p>
<h3 id="httpsgithubcombulentsiyahpython-basics-algorithms-data-structures-object-oriented-programming-job-interview-questionsjob-interview-questionsjob-interview-questions"><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#job-interview-questions"></a>📂Job Interview Questions</h3>
<p>┃📜BecomingHumanCheatSheets.pdf<br />┃📜Data Science interview questions.pdf<br />┃📜Data-Science-Interview-Questions-and-Answers-General-.md<br />┃ ┗ 📜The Ultimate Guide to Machine Learning Job Interviews.pdf</p>
<h3 id="httpsgithubcombulentsiyahpython-basics-algorithms-data-structures-object-oriented-programming-job-interview-questionsobject-oriented-programmingobject-oriented-programming"><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#object-oriented-programming"></a>📂Object Oriented Programming</h3>
<p>┃📜abstract class.py<br />┃📜attributes-encapsulation-inheritance-overriding-polymorphism.py<br />┃📜encapsulation.py<br />┃📜inheritance.py<br />┃📜overriding.py<br />┃ ┗ 📜polymorphism.py</p>
<h3 id="httpsgithubcombulentsiyahpython-basics-algorithms-data-structures-object-oriented-programming-job-interview-questionspythonpython"><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#python"></a>📂Python</h3>
<p>┃ ┗ 📜python-exercise.ipynb</p>
<h3 id="httpsgithubcombulentsiyahpython-basics-algorithms-data-structures-object-oriented-programming-job-interview-questionsclean-codeclean-code"><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#clean-code"></a>📂Clean Code</h3>
<p>┃📜clean-architecture.md<br />┃📜clean_code.md<br />┃📜clean_code_summary.md<br />┃ ┗ 📜costemaxime_summary-of-clean-code-by-robert-c-martin.pdf</p>
<h3 id="httpsgithubcombulentsiyahpython-basics-algorithms-data-structures-object-oriented-programming-job-interview-questionsdata-structuresdata-structures"><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#data-structures"></a>📂Data Structures</h3>
<p>┃📂Arrays<br />┃ ┃📜Array (Dizi) Yapısı.ipynb<br />┃ ┃📜array big o.jpg<br />┃ ┃ ┗ 📜dynamicAr.jpg<br />┃📂Deque<br />┃ ┃📜circular-deque.py<br />┃ ┃ ┗ 📜deque.py<br />┃📂Linked-lists<br />┃ ┃📂circular-doubly-linked-list<br />┃ ┃ ┃📜list.py<br />┃ ┃ ┃ ┗ 📜node.py<br />┃ ┃📂circular-singly-linked-list<br />┃ ┃ ┃📜list.py<br />┃ ┃ ┃ ┗ 📜node.py<br />┃ ┃📂doubly-linked-list<br />┃ ┃ ┃📜list.py<br />┃ ┃ ┃ ┗ 📜node.py<br />┃ ┃ ┗ 📂singly-linked-list<br />┃ ┃ ┃📜list.py<br />┃ ┃ ┃ ┗ 📜node.py<br />┃ ┗ 📂Recursion<br />┃ ┃📜convert-number-iterative.py<br />┃ ┃📜convert-number.py<br />┃ ┃📜factorial.py<br />┃ ┃📜fibonacci-iterative.py<br />┃ ┃📜fibonacci-memoization.py<br />┃ ┃📜fibonacci-recursive-worst-solution.py<br />┃ ┃📜fibonacci-recursive.py<br />┃ ┃📜fibonacci-sum-recursive.py<br />┃ ┃📜maze.py<br />┃ ┃📜palindrome.py<br />┃ ┃📜reverse-linked-list-iterative.py<br />┃ ┃📜reverse-linked-list.py<br />┃ ┃📜reverse-list.py<br />┃ ┃📜reverse-string.py<br />┃ ┃📜stack.py<br />┃ ┃📜sum-numbers-binary-recursion.py<br />┃ ┃📜sum-numbers-pointer.py<br />┃ ┃📜sum-numbers-slicing.py<br />┃ ┃ ┗ 📜towers-of-hanoi.py</p>
<h3 id="httpsgithubcombulentsiyahpython-basics-algorithms-data-structures-object-oriented-programming-job-interview-questionsalgorithmsalgorithms"><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#algorithms"></a>📂Algorithms</h3>
<p>┃📂Graphs and Graph Algorithms<br />┃ ┃📂breadth-first-search<br />┃ ┃ ┃📜graph.py<br />┃ ┃ ┃📜main.py<br />┃ ┃📂cycle-detection<br />┃ ┃ ┃📂cycle-detection-directed-graph<br />┃ ┃ ┃ ┃📜Graph directed cycle.png<br />┃ ┃ ┃ ┃📜Graph directed no cycle.png<br />┃ ┃ ┃ ┃📜graph.py<br />┃ ┃ ┃📂cycle-detection-undirected-graph<br />┃ ┃ ┃ ┃📜Graph undirected.png<br />┃ ┃ ┃ ┃📜graph.py<br />┃ ┃📂depth-first-search<br />┃ ┃ ┃📂depth-first-search<br />┃ ┃ ┃ ┃📜graph.py<br />┃ ┃ ┃ ┃📜main.py<br />┃ ┃📂graphs<br />┃ ┃ ┃📂dijkstra<br />┃ ┃ ┃ ┃📂matrix-impl<br />┃ ┃ ┃ ┃ ┃📜graph.py<br />┃ ┃ ┃ ┃ ┃📜main.py<br />┃ ┃ ┃ ┃ ┗ 📂priority-queue-impl-adjacency-map<br />┃ ┃ ┃ ┃ ┃📜graph.py<br />┃ ┃ ┃ ┃ ┃📜main.py<br />┃ ┃ ┃ ┃ ┃📜priorityqueue.py<br />┃ ┃ ┃📂is-graph-bipartite<br />┃ ┃ ┃ ┃📜graph.py<br />┃ ┃ ┃ ┃📜main.py<br />┃ ┃ ┃ ┗ 📂prims-algorithm<br />┃ ┃ ┃ ┃📜graph.py<br />┃ ┃ ┃ ┃📜main.py<br />┃ ┃ ┃ ┃📜priorityqueue.py<br />┃ ┃ ┗ 📂topological-sorting<br />┃ ┃ ┃📜graph.py<br />┃ ┃ ┃ ┗ 📜main.py<br />┃📂Sorting and Searching<br />┃ ┃📂hashing<br />┃ ┃ ┃📜HashMap.py<br />┃ ┃ ┃ ┗ 📜HashMapChaining.py<br />┃ ┃📂searching<br />┃ ┃ ┃📂binary search<br />┃ ┃ ┃ ┃📜iterative.py<br />┃ ┃ ┃ ┃📜recursive-no-slicing.py<br />┃ ┃ ┃ ┃ ┗ 📜recursive.py<br />┃ ┃ ┃📂sequential search<br />┃ ┃ ┃ ┃📜ordered-list.py<br />┃ ┃ ┃ ┃ ┗ 📜unordered-list.py<br />┃ ┃ ┃📜binary-search-iterative.py<br />┃ ┃ ┃📜binary-search-recursive-pointers.py<br />┃ ┃ ┃📜binary-search-recursive.py<br />┃ ┃ ┃📜sequential-search-ordered-list.py<br />┃ ┃ ┃ ┗ 📜sequential-search-unordered-list.py<br />┃ ┃ ┗ 📂sorting<br />┃ ┃ ┃📂bubble sort<br />┃ ┃ ┃ ┃📜bubble-sort-recursive.py<br />┃ ┃ ┃ ┃📜bubble-sort.py<br />┃ ┃ ┃ ┃ ┗ 📜short-bubble.py<br />┃ ┃ ┃📂heapsort<br />┃ ┃ ┃ ┃📜binaryheap.py<br />┃ ┃ ┃ ┃ ┗ 📜main.py<br />┃ ┃ ┃📂insertion sort<br />┃ ┃ ┃ ┃ ┗ 📜insertion-sort.py<br />┃ ┃ ┃📂merge sort<br />┃ ┃ ┃ ┃📜merge-sort-return-list.py<br />┃ ┃ ┃ ┃ ┗ 📜merge-sort.py<br />┃ ┃ ┃📂quicksort<br />┃ ┃ ┃ ┃📜quick-sort-return-list.py<br />┃ ┃ ┃ ┃ ┗ 📜quicksort.py<br />┃ ┃ ┃ ┗ 📂selection sort<br />┃ ┃ ┃ ┃ 
┗ 📜selection-sort.py<br />┃ ┗ 📂Trees and Tree Algorithms<br />┃ ┃📂avl tree<br />┃ ┃ ┃📜avl.py<br />┃ ┃ ┃ ┗ 📜treenode.py<br />┃ ┃📂binary heap<br />┃ ┃ ┃ ┗ 📜binary-heap.py<br />┃ ┃📂bst<br />┃ ┃ ┃📜bst.py<br />┃ ┃ ┃ ┗ 📜treenode.py<br />┃ ┃📂list representation<br />┃ ┃ ┃ ┗ 📜tree.py<br />┃ ┃📂nodes representation<br />┃ ┃ ┃📜exercise.py<br />┃ ┃ ┃ ┗ 📜tree.py<br />┃ ┃📂parse tree<br />┃ ┃ ┃📜main.py<br />┃ ┃ ┃📜stack.py<br />┃ ┃ ┃ ┗ 📜tree.py<br />┃ ┃📂tree<br />┃ ┃ ┃ ┗ 📜tree.py<br />┃ ┃ ┗ 📂tree traversal<br />┃ ┃ ┃📜exercise01-methods.py<br />┃ ┃ ┃📜exercise02-functions.py<br />┃ ┃ ┃📜exercise03-postorder.py<br />┃ ┃ ┃📜exercise04-inorder.py<br />┃ ┃ ┃📜preorder-indentation.py<br />┃ ┃ ┃📜stack.py<br />┃ ┃ ┃ ┗ 📜tree.py</p>
<p><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions/blob/master/.images/python_basic.png"><img src="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions/raw/master/.images/python_basic.png" alt /></a></p>
<h2 id="httpsgithubcombulentsiyahpython-basics-algorithms-data-structures-object-oriented-programming-job-interview-questions-license-license"><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#-license"></a>📝 License</h2>
<p>This project is licensed under <a target="_blank" href="https://opensource.org/licenses/MIT">MIT</a> license.</p>
<h2 id="httpsgithubcombulentsiyahpython-basics-algorithms-data-structures-object-oriented-programming-job-interview-questions-show-your-support-show-your-support"><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#-show-your-support"></a>👨‍🚀 Show your support</h2>
<p>Give a ⭐️ if this project helped you!</p>
<h2 id="httpsgithubcombulentsiyahpython-basics-algorithms-data-structures-object-oriented-programming-job-interview-questionsreferencesreferences"><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#references"></a>References:</h2>
<p><a target="_blank" href="https://www.udemy.com/course/object-oriented-programming-masterclass-with-python-a-z/">https://www.udemy.com/course/object-oriented-programming-masterclass-with-python-a-z/</a></p>
<h3 id="httpsgithubcombulentsiyahpython-basics-algorithms-data-structures-object-oriented-programming-job-interview-questionsclean-code-1clean-code"><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#clean-code-1"></a>Clean Code</h3>
<p><a target="_blank" href="https://gist.github.com/wojteklu/73c6914cc446146b8b533c0988cf8d29">https://gist.github.com/wojteklu/73c6914cc446146b8b533c0988cf8d29</a> <a target="_blank" href="https://gist.github.com/leeweiminsg/b31495b05136a29ceff86f5c4967a697">https://gist.github.com/leeweiminsg/b31495b05136a29ceff86f5c4967a697</a> <a target="_blank" href="https://gist.github.com/scottashipp/88b3a4d97eaa542842bcf5b08f5bac6d">https://gist.github.com/scottashipp/88b3a4d97eaa542842bcf5b08f5bac6d</a> <a target="_blank" href="https://gist.github.com/vaibhavpaliwal/508f4e67f7fd36209f2d92455b39de85">https://gist.github.com/vaibhavpaliwal/508f4e67f7fd36209f2d92455b39de85</a> <a target="_blank" href="https://gist.github.com/jonnyjava/4f615567f0b55d361654">https://gist.github.com/jonnyjava/4f615567f0b55d361654</a> <a target="_blank" href="https://gist.github.com/zhehaowang/b6c9517dc690054670c8638f18a68a42">https://gist.github.com/zhehaowang/b6c9517dc690054670c8638f18a68a42</a> <a target="_blank" href="https://www.youtube.com/playlist?list=PLxw2ybf4zPJ5TncW4_IWqFSGGybTaXs5I">https://www.youtube.com/playlist?list=PLxw2ybf4zPJ5TncW4_IWqFSGGybTaXs5I</a></p>
<h3 id="httpsgithubcombulentsiyahpython-basics-algorithms-data-structures-object-oriented-programming-job-interview-questionsdata-structures-1data-structures"><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#data-structures-1"></a>Data Structures</h3>
<p><a target="_blank" href="https://www.udemy.com/course/algorithms-data-structures-and-real-life-python-problems/">https://www.udemy.com/course/algorithms-data-structures-and-real-life-python-problems/</a> <a target="_blank" href="https://github.com/Hemant-Jain-Author/Problem-Solving-in-Data-Structures-Algorithms-using-Python">https://github.com/Hemant-Jain-Author/Problem-Solving-in-Data-Structures-Algorithms-using-Python</a> <a target="_blank" href="https://github.com/ivanmmarkovic/Problem-Solving-with-Algorithms-and-Data-Structures-using-Python">https://github.com/ivanmmarkovic/Problem-Solving-with-Algorithms-and-Data-Structures-using-Python</a> <a target="_blank" href="https://github.com/OmkarPathak/Data-Structures-using-Python/tree/master/Arrays">https://github.com/OmkarPathak/Data-Structures-using-Python/tree/master/Arrays</a></p>
<h3 id="httpsgithubcombulentsiyahpython-basics-algorithms-data-structures-object-oriented-programming-job-interview-questionsinterviewquestionsinterviewquestions"><a target="_blank" href="https://github.com/bulentsiyah/Python-Basics-Algorithms-Data-Structures-Object-Oriented-Programming-Job-Interview-Questions#interview_questions"></a>Interview_Questions</h3>
<p><a target="_blank" href="https://github.com/JifuZhao/120-DS-Interview-Questions/blob/master/DataScience_Interview_Questions.pdf">https://github.com/JifuZhao/120-DS-Interview-Questions/blob/master/DataScience_Interview_Questions.pdf</a> <a target="_blank" href="https://gist.github.com/felipemoraes/c423d1447ee13585e2270b27f174fb13">https://gist.github.com/felipemoraes/c423d1447ee13585e2270b27f174fb13</a> <a target="_blank" href="https://github.com/rbhatia46/Data-Science-Interview-Resources">https://github.com/rbhatia46/Data-Science-Interview-Resources</a> <a target="_blank" href="https://github.com/conordewey3/DS-Career-Resources/blob/master/Interview-Resources.md">https://github.com/conordewey3/DS-Career-Resources/blob/master/Interview-Resources.md</a> <a target="_blank" href="https://github.com/khanhnamle1994/cracking-the-data-science-interview/blob/master/DataScience%20Interview%20Questions.pdf">https://github.com/khanhnamle1994/cracking-the-data-science-interview/blob/master/DataScience%20Interview%20Questions.pdf</a> <a target="_blank" href="https://github.com/khanhnamle1994/cracking-the-data-science-interview">https://github.com/khanhnamle1994/cracking-the-data-science-interview</a> <a target="_blank" href="https://github.com/zhiqiangzhongddu/Data-Science-Interview-Questions-and-Answers-General-">https://github.com/zhiqiangzhongddu/Data-Science-Interview-Questions-and-Answers-General-</a> <a target="_blank" href="https://www.interviewbit.com/python-interview-questions/">https://www.interviewbit.com/python-interview-questions/</a> <a target="_blank" href="https://www.techbeamers.com/python-interview-questions-programmers/">https://www.techbeamers.com/python-interview-questions-programmers/</a> <a target="_blank" href="https://www.youtube.com/watch?v=HGXlFG_Rz4E&amp;ab_channel=edureka%21">https://www.youtube.com/watch?v=HGXlFG_Rz4E&amp;ab_channel=edureka%21</a></p>
]]></content:encoded></item><item><title><![CDATA[Preprocessing RGB image Masks to Segmentation Masks]]></title><description><![CDATA[Hi, with this kernel, I will transfer your RGB color image masks to pre-process and make it ready for the semantic segmentation model.
The first step in training our segmentation model is to prepare the dataset. We would need the input RGB images and...]]></description><link>https://www.bulentsiyah.com/preprocessing-rgb-image-masks-to-segmentation-masks</link><guid isPermaLink="true">https://www.bulentsiyah.com/preprocessing-rgb-image-masks-to-segmentation-masks</guid><category><![CDATA[Python]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Mon, 11 Jan 2021 19:16:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1611256633967/XP_fmdtiS.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi, with this kernel, I will transfer your RGB color image masks to pre-process and make it ready for the semantic segmentation model.</p>
<p>The first step in training our segmentation model is to prepare the dataset. We need the input RGB images and the corresponding segmentation images. If you want to make your own dataset, a tool like labelme or GIMP can be used to manually generate the ground-truth segmentation masks. Assign each class a unique ID; in the segmentation images, the pixel value should denote the class ID of the corresponding pixel. <a target="_blank" href="https://www.kaggle.com/bulentsiyah/deep-learning-based-semantic-segmentation-keras">You can examine a Deep Learning based Semantic Segmentation example</a></p>
<p><img src="https://divamgupta.com/assets/images/posts/imgseg/image14.png?style=centerme" alt /></p>
<p>If you have RGB color image masks (like the <a target="_blank" href="https://www.kaggle.com/bulentsiyah/semantic-drone-dataset">Aerial Semantic Segmentation Drone Dataset RGB_color_masks Folder</a>), you can follow the steps below. First of all, our goal is to obtain a pixel-based class image from the RGB color image masks, as shown below (left: RGB color image mask, right: pixel-based class image mask).</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5181549%2Ffbd1df30b8700a4f0c6581cd2c3a5a8d%2F040.png?generation=1610278890444345&amp;alt=media" alt /><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5181549%2F7a82aff10c03e2ec9f325b0651c53bfe%2F040.png?generation=1610278931040183&amp;alt=media" alt /></p>
<p>The dataset has RGB colored segmentation masks, and there are 24 classes in the dataset. The dataset has a file named class_dict.csv and it contains the name of the classes and their respective RGB values. The contents of the file are as follows:</p>
<table>
<thead>
<tr>
<td>name</td><td>r</td><td>g</td><td>b</td></tr>
</thead>
<tbody>
<tr>
<td>unlabeled</td><td>0</td><td>0</td><td>0</td></tr>
<tr>
<td>paved-area</td><td>128</td><td>64</td><td>128</td></tr>
<tr>
<td>dirt</td><td>130</td><td>76</td><td>0</td></tr>
<tr>
<td>grass</td><td>0</td><td>102</td><td>0</td></tr>
<tr>
<td>gravel</td><td>112</td><td>103</td><td>87</td></tr>
<tr>
<td>water</td><td>28</td><td>42</td><td>168</td></tr>
<tr>
<td>rocks</td><td>48</td><td>41</td><td>30</td></tr>
<tr>
<td>pool</td><td>0</td><td>50</td><td>89</td></tr>
<tr>
<td>vegetation</td><td>107</td><td>142</td><td>35</td></tr>
<tr>
<td>roof</td><td>70</td><td>70</td><td>70</td></tr>
<tr>
<td>wall</td><td>102</td><td>102</td><td>156</td></tr>
<tr>
<td>window</td><td>254</td><td>228</td><td>12</td></tr>
<tr>
<td>door</td><td>254</td><td>148</td><td>12</td></tr>
<tr>
<td>fence</td><td>190</td><td>153</td><td>153</td></tr>
<tr>
<td>fence-pole</td><td>153</td><td>153</td><td>153</td></tr>
<tr>
<td>person</td><td>255</td><td>22</td><td>96</td></tr>
<tr>
<td>dog</td><td>102</td><td>51</td><td>0</td></tr>
<tr>
<td>car</td><td>9</td><td>143</td><td>150</td></tr>
<tr>
<td>bicycle</td><td>119</td><td>11</td><td>32</td></tr>
<tr>
<td>tree</td><td>51</td><td>51</td><td>0</td></tr>
<tr>
<td>bald-tree</td><td>190</td><td>250</td><td>190</td></tr>
<tr>
<td>ar-marker</td><td>112</td><td>150</td><td>146</td></tr>
<tr>
<td>obstacle</td><td>2</td><td>135</td><td>115</td></tr>
<tr>
<td>conflicting</td><td>255</td><td>0</td><td>0</td></tr>
</tbody>
</table>
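<p>Before running the full conversion below, the core idea (replace every RGB color with a single class ID) can be sketched on a tiny toy mask. The three colors are taken from the table above; the IDs 0, 1, 2 are chosen only for this illustration:</p>
<pre><code># Toy sketch: map each RGB color in a 2x2 mask to a class ID.
import numpy as np

color_to_id = {(0, 0, 0): 0,       # unlabeled
               (128, 64, 128): 1,  # paved-area
               (0, 102, 0): 2}     # grass

# A 2x2 "mask" using those colors.
rgb_mask = np.array([[[0, 0, 0], [128, 64, 128]],
                     [[0, 102, 0], [128, 64, 128]]], dtype=np.uint8)

# Single-channel output: each pixel holds its class ID instead of a color.
id_mask = np.zeros(rgb_mask.shape[:2], dtype=np.uint8)
for color, class_id in color_to_id.items():
    match = np.all(rgb_mask == np.array(color, dtype=np.uint8), axis=-1)
    id_mask[match] = class_id

print(id_mask)
</code></pre>
<p>The full code below does the same thing for all 24 classes read from class_dict_seg.csv (it assigns index+1 as the class ID) and writes each result to disk with cv2.imwrite.</p>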
<pre><code><span class="hljs-keyword">import</span> os,re
<span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
<span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd
<span class="hljs-keyword">import</span> cv2
<span class="hljs-keyword">import</span> matplotlib.pyplot <span class="hljs-keyword">as</span> plt
%matplotlib <span class="hljs-keyword">inline</span>

def atoi(<span class="hljs-type">text</span>) : 
    <span class="hljs-keyword">return</span> <span class="hljs-type">int</span>(<span class="hljs-type">text</span>) <span class="hljs-keyword">if</span> <span class="hljs-type">text</span>.isdigit() <span class="hljs-keyword">else</span> <span class="hljs-type">text</span>

def natural_keys(<span class="hljs-type">text</span>) :
    <span class="hljs-keyword">return</span> [atoi(c) <span class="hljs-keyword">for</span> c <span class="hljs-keyword">in</span> re.split(<span class="hljs-string">'(\d+)'</span>, <span class="hljs-type">text</span>)]


read_csv = pd.read_csv(<span class="hljs-string">'/kaggle/input/semantic-drone-dataset/class_dict_seg.csv'</span>,index_col=<span class="hljs-keyword">False</span>,skipinitialspace=<span class="hljs-keyword">True</span>)
read_csv.head()
</code></pre><pre><code>
def segmantation_images(path,new_path, debug_test_num):
    filenames = []

    <span class="hljs-keyword">for</span> root, dirnames, filenames in os.walk(path):
        filenames.sort(key = natural_keys)
        rootpath = root

    <span class="hljs-comment">#print(filenames)</span>
    count = <span class="hljs-number">0</span>
    <span class="hljs-keyword">for</span> item in filenames:

        <span class="hljs-keyword">if</span> debug_test_num !=<span class="hljs-number">0</span>:
            <span class="hljs-keyword">if</span> debug_test_num &lt;= count:
                <span class="hljs-keyword">break</span>

        count = count + <span class="hljs-number">1</span>

        <span class="hljs-keyword">if</span> os.path.isfile(path+item):
            f, e = os.path.splitext(item)
            image_rgb = Image.open(path+item)
            image_rgb = np.asarray(image_rgb)
            new_image = np.zeros((image_rgb.shape[<span class="hljs-number">0</span>],image_rgb.shape[<span class="hljs-number">1</span>],<span class="hljs-number">3</span>)).astype(<span class="hljs-string">'int'</span>)

            <span class="hljs-keyword">for</span> <span class="hljs-keyword">index</span>, row  in read_csv.iterrows():
                new_image[(image_rgb[:,:,<span class="hljs-number">0</span>]==row.r)&amp;
                          (image_rgb[:,:,<span class="hljs-number">1</span>]==row.g)&amp;
                          (image_rgb[:,:,<span class="hljs-number">2</span>]==row.b)]=np.array([<span class="hljs-keyword">index</span>+<span class="hljs-number">1</span>,<span class="hljs-keyword">index</span>+<span class="hljs-number">1</span>,<span class="hljs-keyword">index</span>+<span class="hljs-number">1</span>]).reshape(<span class="hljs-number">1</span>,<span class="hljs-number">3</span>)

            new_image = new_image[:,:,<span class="hljs-number">0</span>]
            output_filename = new_path+f+<span class="hljs-string">'.png'</span>
            cv2.imwrite(output_filename,new_image)
            <span class="hljs-keyword">print</span>(<span class="hljs-string">'writing file: '</span>,output_filename)

        <span class="hljs-keyword">else</span>:
            <span class="hljs-keyword">print</span>(<span class="hljs-string">'no file'</span>)

    <span class="hljs-keyword">print</span>(<span class="hljs-string">"number of files written: "</span>,count)
</code></pre><pre><code>debug_test_num = 5 <span class="hljs-comment"># 5 samples are enough to see it working; set it to 0 to process the entire dataset.</span>
segmantation_images(<span class="hljs-string">"/kaggle/input/semantic-drone-dataset/RGB_color_image_masks/RGB_color_image_masks/"</span>, <span class="hljs-string">""</span>, debug_test_num=debug_test_num)
</code></pre><pre><code><span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image
input_image = <span class="hljs-string">"/kaggle/input/semantic-drone-dataset/RGB_color_image_masks/RGB_color_image_masks/"</span>+<span class="hljs-string">"002.png"</span>
output_image = <span class="hljs-string">"002.png"</span>

fig, axs = plt.subplots(<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, figsize=(<span class="hljs-number">16</span>, <span class="hljs-number">8</span>), constrained_layout=True)

img_orig = Image.open(input_image)
axs[<span class="hljs-number">0</span>].imshow(img_orig)
axs[<span class="hljs-number">0</span>].grid(False)

output_image = Image.open(output_image)
out_array = np.asarray(output_image)
axs[<span class="hljs-number">1</span>].imshow(out_array)
axs[<span class="hljs-number">1</span>].grid(False)
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/51585872/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..Oe3q3YOyAKLD16nuUO1vHg.34_oHnvTPWOxhMuuJJ8uSYwejvM1lYnElXsx_bWjQEq_oRGVtYyxOPoANzi4AC6LDhS0xzzGss7KVvkjHH8RTsnhZXpnUMqzPMv9Axl4O2Yyp36BCVj84YoGrfxUMQb31j3t3Y85oMeL28uLiY9eOmd80ZjSbxPxppVklVjSFY0hoyzxkx2_hIR7o8yQdCGM2n3bMU0MFxS10MuhIKP27gwuN3RDSR_zQCyqranxult-NVygI9f7cyWEXlel2I2DkEqtms3Q5JYBRsRzfignG3NqJ5j3Riat2bPwyftXSks7Px3v-ntM0_uL8Dq2FN0qVLY-N57-OC-4KomIUSQj4e-0-Sj4A8XFKj6pUyXwIvvejhvkVhiUMoS0HGhiaAAjIMWorSXEZLVvmydmqX7h4uZYjjwDQx1WWcrqNrGbco7V-5UKqf3dkcpTt9N_wSaLpOrdge9i63-g4j2GKYB-mo72feaSYRc7EE7BT3IVJ3fusMw2H8gcpKQ8MORYk0NE4X_JkTH8P5E01YrGIhWYBNm2FmKpy480y74IX3mYRUeeCU7juNUPYRycEeHAYDGMNr5MPr5fZOl_I0fw5aKfcIF2FRfebolwkIJuswW1ZjZT8uooqSQ-zzt8D6BOaW498MSbrwU3Guw1zA0xruOMiWfoFjtuLdfVitowiPSyisW9gp1wjyjX1u4WrffEGqrL.ZDj5GrVQ81LXulTG0aGbfw/__results___files/__results___6_0.png" alt /></p>
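<p>The color-to-class conversion that <code>segmantation_images</code> performs above can also be sketched as a small standalone helper; a minimal illustration on a toy palette (the <code>rgb_to_label</code> helper and the sample colors are ours, not part of the original kernel):</p>
<pre><code>import numpy as np

def rgb_to_label(image_rgb, colors):
    # map an HxWx3 color mask to an HxW label mask;
    # class i+1 is assigned wherever a pixel equals colors[i]
    labels = np.zeros(image_rgb.shape[:2], dtype=np.uint8)
    for idx, color in enumerate(colors):
        match = np.all(image_rgb == color, axis=-1)
        labels[match] = idx + 1
    return labels

colors = np.array([[255, 0, 0], [0, 255, 0]])
mask = np.array([[[255, 0, 0], [0, 255, 0]],
                 [[0, 0, 0], [255, 0, 0]]])
print(rgb_to_label(mask, colors))
# [[1 2]
#  [0 1]]
</code></pre>
<p>Unmatched pixels keep label 0, mirroring the zero-initialized <code>new_image</code> in the function above.</p>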
]]></content:encoded></item><item><title><![CDATA[ImageNet Winning CNN Architectures (ILSVRC)]]></title><description><![CDATA[Click to see this work on  My Kaggle Profile 
In this post, you will discover the ImageNet dataset, the ILSVRC, and the key milestones in image classification that have resulted from the competitions. This post has been prepared by making use of all ...]]></description><link>https://www.bulentsiyah.com/imagenet-winning-cnn-architectures-ilsvrc</link><guid isPermaLink="true">https://www.bulentsiyah.com/imagenet-winning-cnn-architectures-ilsvrc</guid><category><![CDATA[Deep Learning]]></category><category><![CDATA[image processing]]></category><category><![CDATA[neural networks]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[Artificial Intelligence]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Sat, 09 May 2020 17:17:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1607821341525/GHF5lQBoD.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Click to see this work on  <a target="_blank" href="https://www.kaggle.com/getting-started/149448">My Kaggle Profile</a> </p>
<p>In this post, you will discover the ImageNet dataset, the ILSVRC, and the key milestones in <strong>image classification</strong> that have resulted from the competitions. This post was prepared using the references listed below.
<img src="https://cdn.arstechnica.net/wp-content/uploads/2018/10/Screen-Shot-2018-10-12-at-4.24.35-PM-980x577.png" alt="." />
&gt; This slide from the ImageNet team shows the winning team's error rate each year in the top-5 classification task. The error rate fell steadily from 2010 to 2017.</p>
<h1 id="imagenet-dataset">ImageNet Dataset</h1>
<p>ImageNet is a dataset of over 15 million labeled high-resolution images belonging to roughly 22,000 categories. The images were collected from the web and labeled by human labelers using Amazon’s Mechanical Turk crowd-sourcing tool. Starting in 2010, as part of the Pascal Visual Object Challenge, an annual competition called the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) has been held. ILSVRC uses a subset of ImageNet with roughly 1000 images in each of 1000 categories. In all, there are roughly 1.2 million training images, 50,000 validation images, and 150,000 testing images. </p>
<p>On ImageNet, it is customary to report two error rates: top-1 and top-5, where the top-5 error rate is the fraction of test images for which the correct label is not among the five labels considered most probable by the model. ImageNet consists of variable-resolution images, while our system requires a constant input dimensionality.</p>
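<p>As a concrete illustration, the top-k error can be computed directly from a matrix of per-class scores; a minimal NumPy sketch (the scores below are synthetic, not real model outputs):</p>
<pre><code>import numpy as np

def top_k_error(scores, labels, k=5):
    # indices of the k highest-scoring classes for each sample
    top_k = np.argsort(scores, axis=1)[:, -k:]
    # a sample counts as correct if its true label is among those k
    correct = np.any(top_k == labels[:, None], axis=1)
    return 1.0 - correct.mean()

scores = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2]])
labels = np.array([1, 2])
print(top_k_error(scores, labels, k=1))  # 0.5
print(top_k_error(scores, labels, k=3))  # 0.0
</code></pre>
<p>The top-5 error is always at most the top-1 error, which is why reported top-5 numbers are the lower of the two.</p>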
<h1 id="imagenet-large-scale-visual-recognition-challenge-ilsvrc">ImageNet Large Scale Visual Recognition Challenge (ILSVRC)</h1>
<p>The general challenge tasks for most years are as follows:</p>
<ul>
<li>Image classification: Predict the classes of objects present in an image.</li>
<li>Single-object localization: Image classification + draw a bounding box around one example of each object present.</li>
<li>Object detection: Image classification + draw a bounding box around each object present.</li>
</ul>
<p><img src="https://3qeqpr26caki16dnhd19sv6by6v-wpengine.netdna-ssl.com/wp-content/uploads/2019/02/Summary-of-the-Improvement-on-ILSVRC-Tasks-over-the-First-Five-Years-of-the-Competition-1024x291.png" alt="." />
&gt; Summary of the Improvement on ILSVRC Tasks Over the First Five Years of the Competition. Taken from ImageNet Large Scale Visual Recognition Challenge, 2015</p>
<h1 id="deep-learning-milestones-from-ilsvrc">Deep Learning Milestones From ILSVRC</h1>
<p>The pace of improvement in the first five years of the ILSVRC was dramatic, perhaps even shocking to the field of computer vision. Success has primarily been achieved by large (deep) convolutional neural networks (CNNs) on graphical processing unit (GPU) hardware, which sparked an interest in deep learning that extended beyond the field out into the mainstream.</p>
<h2 id="ilsvrc-2012">ILSVRC-2012</h2>
<h3 id="alexnet-supervision">AlexNet (SuperVision)</h3>
<p><img src="https://iq.opengenus.org/content/images/2019/01/alexnet-1.png" alt />
<img src="https://miro.medium.com/max/1316/1*BASjitcB1kbfc0LH-Jtwjw.png" alt /></p>
<p>On 30 September 2012, a convolutional neural network (CNN) called AlexNet achieved a top-5 error of 15.3% in the ImageNet 2012 Challenge, more than 10.8 percentage points lower than that of the runner-up. This was made feasible by the use of graphics processing units (GPUs) during training, an essential ingredient of the deep learning revolution. According to The Economist, "Suddenly people started to pay attention, not just within the AI community but across the technology industry as a whole."</p>
<ul>
<li><a target="_blank" href="https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks">ImageNet Classification with Deep Convolutional Neural Networks</a>, 2012. (Authors: Alex Krizhevsky, Ilya Sutskever, Geoffrey Hinton. University of Toronto, Canada.)</li>
<li>With 60M parameters, AlexNet has 8 layers — 5 convolutional and 3 fully-connected.</li>
<li>They were among the first to use Rectified Linear Units (ReLUs) as activation functions in a large-scale CNN.</li>
</ul>
<h2 id="ilsvrc-2013">ILSVRC-2013</h2>
<h3 id="zfnet-clarifai">ZFNet (Clarifai)</h3>
<p><img src="https://iq.opengenus.org/content/images/2019/01/zfnet.png" alt />
<img src="https://cdn-images-1.medium.com/freeze/max/1000/1*qS5yiOWELCf9q0W7igfz6g.png?q=20" alt /></p>
<p>Matthew Zeiler and Rob Fergus propose a variation of AlexNet generally referred to as ZFNet in their 2013 paper titled “<a target="_blank" href="https://arxiv.org/abs/1311.2901">Visualizing and Understanding Convolutional Networks</a>,” a variation of which won the ILSVRC-2013 image classification task.</p>
<h2 id="ilsvrc-2014">ILSVRC-2014</h2>
<p><img src="https://cdn-images-1.medium.com/freeze/max/1000/1*smaJBYed3MSsqDodgnJnXw.png?q=20" alt /></p>
<h3 id="inception-googlenet">Inception (GoogLeNet)</h3>
<p><img src="https://iq.opengenus.org/content/images/2019/01/googlenet-1.png" alt /></p>
<p>Christian Szegedy, et al. from Google achieved top results for object detection with their GoogLeNet model, which made use of the inception module and architecture. The approach is described in their 2014 paper titled “<a target="_blank" href="https://arxiv.org/abs/1409.4842">Going Deeper with Convolutions</a>.”
It introduced the Inception module, which showed that the layers of a CNN do not always have to be stacked sequentially. GoogLeNet won the ILSVRC 2014 classification task with an error rate of 6.7%.</p>
<h3 id="vgg">VGG</h3>
<p><img src="https://iq.opengenus.org/content/images/2019/01/vgg.png" alt />
Karen Simonyan and Andrew Zisserman of the Oxford Visual Geometry Group (VGG) achieved top results for image classification and localization with their VGG model, described in their 2015 paper titled “<a target="_blank" href="https://arxiv.org/abs/1409.1556">Very Deep Convolutional Networks for Large-Scale Image Recognition</a>.”
VGG-16 has 13 convolutional and 3 fully-connected layers, carrying over the ReLU tradition from AlexNet. It stacks more layers onto AlexNet and uses smaller filters (3×3 convolutions with 2×2 pooling). The network has 138M parameters and takes up about 500MB of storage. The group also designed a deeper variant, VGG-19.</p>
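<p>The 138M figure can be sanity-checked layer by layer: a convolutional layer has k × k × c_in × c_out weights plus c_out biases, and a fully-connected layer has n_in × n_out weights plus n_out biases. A quick sketch for a few standard VGG-16 layers (the helper names are ours):</p>
<pre><code>def conv_params(k, c_in, c_out):
    # k x k kernel per (input channel, output channel) pair, plus biases
    return k * k * c_in * c_out + c_out

def dense_params(n_in, n_out):
    return n_in * n_out + n_out

print(conv_params(3, 3, 64))            # first conv layer: 1792
print(conv_params(3, 64, 64))           # second conv layer: 36928
print(dense_params(7 * 7 * 512, 4096))  # first FC layer: 102764544
</code></pre>
<p>The first fully-connected layer alone accounts for roughly 103M of the 138M parameters.</p>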
<h2 id="ilsvrc-2015">ILSVRC-2015</h2>
<h3 id="resnet-msra">ResNet (MSRA)</h3>
<p><img src="https://iq.opengenus.org/content/images/2019/01/resnet.png" alt />
<img src="https://miro.medium.com/max/1400/1*IGgSqXFauzbeJtZJ6CBPbg.png" alt />
Kaiming He, et al. from Microsoft Research achieved top results in the image classification, object detection, and localization tasks with their Residual Network, or ResNet, described in their 2015 paper titled “Deep Residual Learning for Image Recognition.”
An ensemble of these residual nets achieves a 3.57% error rate on the ImageNet test set. It is an ultra-deep (quoting the authors) architecture with 152 layers, and it introduced the residual block, which eases the training of very deep networks.</p>
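<p>The residual idea itself is compact: a block computes a residual F(x) with a couple of layers and adds the input x back through an identity shortcut, so a very deep stack can fall back to identity mappings. A toy NumPy sketch (illustrative weights, not ResNet's actual layers):</p>
<pre><code>import numpy as np

def relu(x):
    return np.maximum(0, x)

def residual_block(x, w1, w2):
    # two weight layers compute the residual F(x); the shortcut adds x back
    return relu(w2 @ relu(w1 @ x) + x)

x = np.ones(4)
w1 = np.zeros((4, 4))  # if the layers learn nothing, F(x) = 0 ...
w2 = np.zeros((4, 4))
print(residual_block(x, w1, w2))  # ... and the block passes x through: [1. 1. 1. 1.]
</code></pre>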
<h2 id="ilsvrc-2016">ILSVRC-2016</h2>
<h3 id="resnext">ResNeXt</h3>
<p><img src="https://miro.medium.com/max/1400/1*RHpn70qFNCcqyVjkPdFtGA.png" alt />
<img src="https://miro.medium.com/max/1400/1*LOoc11tkDoqv0pC6OH7mwA.png" alt /></p>
<p>The model name ResNeXt contains “Next”: it refers to the next dimension on top of ResNet, called the “cardinality” dimension. ResNeXt was the 1st runner-up of the ILSVRC 2016 classification task.</p>
<h2 id="ilsvrc-2017">ILSVRC-2017</h2>
<h3 id="senet">SENet</h3>
<p><img src="https://miro.medium.com/max/2000/1*7CHDHQ2hNuwIwNEdW0Z-PA.png" alt />
<img src="https://miro.medium.com/max/2000/1*jUn4ojyEVxqPdM-vDV63IA.png" alt /></p>
<p>SENet is built from “Squeeze-and-Excitation” (SE) blocks, which adaptively recalibrate channel-wise feature responses by explicitly modelling interdependencies between channels. It won first place in the ILSVRC 2017 classification challenge with a top-5 error of 2.251%, a relative improvement of about 25% over the winning entry of 2016. The paper appeared at CVPR 2018 with more than 600 citations and was later published in TPAMI 2019. </p>
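<p>The SE block itself is only a few operations: a global-average-pool “squeeze” per channel, a small two-layer “excitation” network ending in a sigmoid, and a channel-wise rescale. A minimal NumPy sketch (random illustrative weights with a reduction ratio of 2, not the paper's trained parameters):</p>
<pre><code>import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feat, w1, w2):
    # feat: (H, W, C) feature map
    z = feat.mean(axis=(0, 1))                # squeeze: per-channel global average
    s = sigmoid(w2 @ np.maximum(0, w1 @ z))   # excitation: FC-ReLU-FC-sigmoid
    return feat * s                           # recalibrate each channel by its weight

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 8, 4))
w1 = rng.standard_normal((2, 4))  # squeeze C=4 down to 2
w2 = rng.standard_normal((4, 2))  # expand back to C=4
print(se_block(feat, w1, w2).shape)  # (8, 8, 4)
</code></pre>
<p>Because the sigmoid keeps each channel weight in (0, 1), this sketch can only attenuate channels, which is what “recalibration” means here.</p>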
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1200915%2F676191aeaac9521bfaddfba85c8fcd99%2Fson1.PNG?generation=1588927378961883&amp;alt=media" alt /></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1200915%2F02df2c3aacafc57c467c50493c814fd8%2Fson2.PNG?generation=1588927396203209&amp;alt=media" alt /></p>
<p>Thank you for reading this far. I have always wondered how ImageNet has progressed, and I hope this post benefits everyone who reads it. </p>
<p>References:</p>
<ul>
<li>http://www.cs.toronto.edu/~hinton/absps/imagenet.pdf</li>
<li>https://machinelearningmastery.com/introduction-to-the-imagenet-large-scale-visual-recognition-challenge-ilsvrc/</li>
<li>https://arxiv.org/pdf/1409.0575.pdf</li>
<li>http://image-net.org/challenges/LSVRC/2017/index</li>
<li>https://en.wikipedia.org/wiki/ImageNet</li>
<li>https://towardsdatascience.com/illustrated-10-cnn-architectures-95d78ace614d</li>
<li>https://www.codesofinterest.com/2017/07/milestones-of-deep-learning.html</li>
<li>http://image-net.org/challenges/LSVRC/2016/results</li>
<li>https://towardsdatascience.com/review-trimps-soushen-winner-in-ilsvrc-2016-image-classification-dfbc423111dd</li>
<li>https://towardsdatascience.com/review-resnext-1st-runner-up-of-ilsvrc-2016-image-classification-15d7f17b42ac</li>
<li>http://image-net.org/challenges/LSVRC/2017/results</li>
<li>https://towardsdatascience.com/review-senet-squeeze-and-excitation-network-winner-of-ilsvrc-2017-image-classification-a887b98b2883</li>
<li>https://medium.com/coinmonks/paper-review-of-alexnet-caffenet-winner-in-ilsvrc-2012-image-classification-b93598314160</li>
<li>https://mc.ai/paper-review-of-zfnet-the-winner-of-ilsvlc-2013-image-classification/</li>
<li>http://image-net.org/challenges/talks_2017/imagenet_ilsvrc2017_v1.0.pdf</li>
<li>https://iq.opengenus.org/evolution-of-cnn-architectures/</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[How to use the Python programming Language for Time Series Analysis!]]></title><description><![CDATA[Click to see this work on  My Kaggle Profile 
This work was prepared together with Gul Bulut and Bulent Siyah. The whole study consists of two parts

Time Series Forecasting and Analysis- Part 1
Time Series Forecasting and Analysis- Part 2

This ke...]]></description><link>https://www.bulentsiyah.com/time-series-forecasting-and-analysis</link><guid isPermaLink="true">https://www.bulentsiyah.com/time-series-forecasting-and-analysis</guid><category><![CDATA[Deep Learning]]></category><category><![CDATA[Python]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Wed, 06 May 2020 18:55:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1611255750818/T0ITulGCy.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Click to see this work on  <a target="_blank" href="https://www.kaggle.com/bulentsiyah">My Kaggle Profile</a> </p>
<p>This work was prepared together with <a target="_blank" href="https://www.kaggle.com/gulyvz">Gul Bulut</a> and <a target="_blank" href="https://www.kaggle.com/bulentsiyah/">Bulent Siyah</a>. <strong>The whole study consists of two parts.</strong></p>
<ul>
<li><a target="_blank" href="https://www.kaggle.com/gulyvz/time-series-forecasting-and-analysis-part-1">Time Series Forecasting and Analysis- Part 1</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2">Time Series Forecasting and Analysis- Part 2</a></li>
</ul>
<p>This kernel will teach you everything you need to use Python to forecast time series data and predict future data points.</p>
<p><img src="https://iili.io/JaZxFS.png" alt /></p>
<p>We'll also learn about state-of-the-art deep learning techniques, using recurrent neural networks to forecast future data points.</p>
<p><img src="https://iili.io/JaZCMl.png" alt /></p>
<p>This kernel even covers Facebook's Prophet library, a simple-to-use yet powerful Python library for forecasting time series data.</p>
<p><img src="https://iili.io/JaZnP2.png" alt /></p>
<h1 id="content-part-1httpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2content-part-1"><strong>Content Part 1</strong><a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Content-Part-1"></a></h1>
<ol>
<li><a target="_blank" href="https://www.kaggle.com/gulyvz/time-series-forecasting-and-analysis-part-1#1.">How to Work with Time Series Data with Pandas</a></li>
<li><a target="_blank" href="https://www.kaggle.com/gulyvz/time-series-forecasting-and-analysis-part-1#2.">Use Statsmodels to Analyze Time Series Data</a></li>
<li><a target="_blank" href="https://www.kaggle.com/gulyvz/time-series-forecasting-and-analysis-part-1#3.">General Forecasting Models - ARIMA(Autoregressive Integrated Moving Average)</a></li>
<li><a target="_blank" href="https://www.kaggle.com/gulyvz/time-series-forecasting-and-analysis-part-1#4.">General Forecasting Models - SARIMA(Seasonal Autoregressive Integrated Moving Average)</a></li>
<li><a target="_blank" href="https://www.kaggle.com/gulyvz/time-series-forecasting-and-analysis-part-1#5.">General Forecasting Models - SARIMAX</a></li>
</ol>
<h1 id="content-part-2httpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2content-part-2"><strong>Content Part 2</strong><a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Content-Part-2"></a></h1>
<ol>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#1.">Deep Learning for Time Series Forecasting - (RNN)</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#2.">Multivariate Time Series with RNN</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#3.">Use Facebook's Prophet Library for forecasting</a></li>
</ol>
<p>In [1]:</p>
<pre><code><span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
%matplotlib inline
<span class="hljs-keyword">import</span> matplotlib.pyplot <span class="hljs-keyword">as</span> plt
</code></pre><p>In [2]:</p>
<pre><code>df = pd.read_csv(<span class="hljs-string">'/kaggle/input/for-simple-exercises-time-series-forecasting/Alcohol_Sales.csv'</span>, index_col=<span class="hljs-string">'DATE'</span>, parse_dates=<span class="hljs-literal">True</span>)
df.index.freq = <span class="hljs-string">'MS'</span>
df.head()
</code></pre><p>Out[2]:</p>
<pre><code>            S4248SM144NCEN
DATE
1992-01-01            3459
1992-02-01            3458
1992-03-01            4002
1992-04-01            4564
1992-05-01            4221
</code></pre>
<p>In [3]:</p>
<pre><code>df.columns = [<span class="hljs-string">'Sales'</span>]
df.plot(figsize=(<span class="hljs-number">12</span>,<span class="hljs-number">8</span>))
</code></pre><p>Out[3]:</p>
<pre><code>&lt;<span class="hljs-selector-tag">matplotlib</span><span class="hljs-selector-class">.axes</span><span class="hljs-selector-class">._subplots</span><span class="hljs-selector-class">.AxesSubplot</span> <span class="hljs-selector-tag">at</span> 0<span class="hljs-selector-tag">x7f708742e048</span>&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/33406342/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nbGzvutVCsrIxnly2nIyvg.DMkdh133Ww15ufatpKLuS_kFO6KdrUgMUxq9TRI-hMVtsPKFL5LlXPFLiYn5DEw3r7JSd-cbKfEWe66Sqj-2vTxupwZRtKWaVW6sofj2UPNNptW5lNO67_AkkrcCUbwx8YCq_3Gv-X_Y-KoUG08rxc2_wxYYs2mWihTrS9Yd7C8XHYeAbUWDK7xEKrc20--u8yBExSNCiA2vXegVhfeRPHfBfRaLgGn558gdLW5U4KeWM00mM_ukEth0Glwi3EmTDDiJ42a0yG6-aBeDJOBdX4pfGSCBDtpqlCGO5uYW2KG-l3WvhJw_p9olp7Awrnto7cld8re2_FFKLT7VzHmDvSewvkYoyrWX5f_f0KPi3ZtcLP6z-0QsqrubeWYFYDQsplMKi2PUyRR57nFVSuUfh9X01cBYbCE7FBFY8a7Sob4EErKr7rDg3aktdNNN7I_ekCHCQF7eN3OkqqGqve1_DfDsOGUOnT_hwvps9Bo7KLiNwrJy4ud6UbLyW-Icohmt1BHl2WRkJAfkfwNLGhPeVmQ0mN7BPPfVP6hTiBlnbaf7dgiwbbMy9o2050-ceADTJWmxKjkEbsi-7SB_zbpdc5XfXD9Bi0oH6Grb71cFxGOYzH41HM20phixjAERq6iNQCZ4PTclIeySSJscMANGCMHS2g5Z-sPz5YOxTN3sXC4.axN8ODMyWnNUZwNbkoANdQ/__results___files/__results___4_1.png" alt /></p>
<p>In [4]:</p>
<pre><code><span class="hljs-keyword">from</span> statsmodels.tsa.seasonal <span class="hljs-keyword">import</span> seasonal_decompose
results = seasonal_decompose(df[<span class="hljs-string">'Sales'</span>])
results.observed.plot(figsize=(<span class="hljs-number">12</span>,<span class="hljs-number">2</span>))
</code></pre><p>Out[4]:</p>
<pre><code>&lt;<span class="hljs-selector-tag">matplotlib</span><span class="hljs-selector-class">.axes</span><span class="hljs-selector-class">._subplots</span><span class="hljs-selector-class">.AxesSubplot</span> <span class="hljs-selector-tag">at</span> 0<span class="hljs-selector-tag">x7f70751227f0</span>&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/33406342/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nbGzvutVCsrIxnly2nIyvg.DMkdh133Ww15ufatpKLuS_kFO6KdrUgMUxq9TRI-hMVtsPKFL5LlXPFLiYn5DEw3r7JSd-cbKfEWe66Sqj-2vTxupwZRtKWaVW6sofj2UPNNptW5lNO67_AkkrcCUbwx8YCq_3Gv-X_Y-KoUG08rxc2_wxYYs2mWihTrS9Yd7C8XHYeAbUWDK7xEKrc20--u8yBExSNCiA2vXegVhfeRPHfBfRaLgGn558gdLW5U4KeWM00mM_ukEth0Glwi3EmTDDiJ42a0yG6-aBeDJOBdX4pfGSCBDtpqlCGO5uYW2KG-l3WvhJw_p9olp7Awrnto7cld8re2_FFKLT7VzHmDvSewvkYoyrWX5f_f0KPi3ZtcLP6z-0QsqrubeWYFYDQsplMKi2PUyRR57nFVSuUfh9X01cBYbCE7FBFY8a7Sob4EErKr7rDg3aktdNNN7I_ekCHCQF7eN3OkqqGqve1_DfDsOGUOnT_hwvps9Bo7KLiNwrJy4ud6UbLyW-Icohmt1BHl2WRkJAfkfwNLGhPeVmQ0mN7BPPfVP6hTiBlnbaf7dgiwbbMy9o2050-ceADTJWmxKjkEbsi-7SB_zbpdc5XfXD9Bi0oH6Grb71cFxGOYzH41HM20phixjAERq6iNQCZ4PTclIeySSJscMANGCMHS2g5Z-sPz5YOxTN3sXC4.axN8ODMyWnNUZwNbkoANdQ/__results___files/__results___5_1.png" alt /></p>
<p>In [5]:</p>
<pre><code><span class="hljs-attribute">results</span>.trend.plot(figsize=(<span class="hljs-number">12</span>,<span class="hljs-number">2</span>)) 
</code></pre><p>Out[5]:</p>
<pre><code>&lt;<span class="hljs-selector-tag">matplotlib</span><span class="hljs-selector-class">.axes</span><span class="hljs-selector-class">._subplots</span><span class="hljs-selector-class">.AxesSubplot</span> <span class="hljs-selector-tag">at</span> 0<span class="hljs-selector-tag">x7f7074e84550</span>&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/33406342/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nbGzvutVCsrIxnly2nIyvg.DMkdh133Ww15ufatpKLuS_kFO6KdrUgMUxq9TRI-hMVtsPKFL5LlXPFLiYn5DEw3r7JSd-cbKfEWe66Sqj-2vTxupwZRtKWaVW6sofj2UPNNptW5lNO67_AkkrcCUbwx8YCq_3Gv-X_Y-KoUG08rxc2_wxYYs2mWihTrS9Yd7C8XHYeAbUWDK7xEKrc20--u8yBExSNCiA2vXegVhfeRPHfBfRaLgGn558gdLW5U4KeWM00mM_ukEth0Glwi3EmTDDiJ42a0yG6-aBeDJOBdX4pfGSCBDtpqlCGO5uYW2KG-l3WvhJw_p9olp7Awrnto7cld8re2_FFKLT7VzHmDvSewvkYoyrWX5f_f0KPi3ZtcLP6z-0QsqrubeWYFYDQsplMKi2PUyRR57nFVSuUfh9X01cBYbCE7FBFY8a7Sob4EErKr7rDg3aktdNNN7I_ekCHCQF7eN3OkqqGqve1_DfDsOGUOnT_hwvps9Bo7KLiNwrJy4ud6UbLyW-Icohmt1BHl2WRkJAfkfwNLGhPeVmQ0mN7BPPfVP6hTiBlnbaf7dgiwbbMy9o2050-ceADTJWmxKjkEbsi-7SB_zbpdc5XfXD9Bi0oH6Grb71cFxGOYzH41HM20phixjAERq6iNQCZ4PTclIeySSJscMANGCMHS2g5Z-sPz5YOxTN3sXC4.axN8ODMyWnNUZwNbkoANdQ/__results___files/__results___6_1.png" alt /></p>
<p>In [6]:</p>
<pre><code><span class="hljs-attribute">results</span>.seasonal.plot(figsize=(<span class="hljs-number">12</span>,<span class="hljs-number">2</span>)) 
</code></pre><p>Out[6]:</p>
<pre><code>&lt;<span class="hljs-selector-tag">matplotlib</span><span class="hljs-selector-class">.axes</span><span class="hljs-selector-class">._subplots</span><span class="hljs-selector-class">.AxesSubplot</span> <span class="hljs-selector-tag">at</span> 0<span class="hljs-selector-tag">x7f7074da3f28</span>&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/33406342/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nbGzvutVCsrIxnly2nIyvg.DMkdh133Ww15ufatpKLuS_kFO6KdrUgMUxq9TRI-hMVtsPKFL5LlXPFLiYn5DEw3r7JSd-cbKfEWe66Sqj-2vTxupwZRtKWaVW6sofj2UPNNptW5lNO67_AkkrcCUbwx8YCq_3Gv-X_Y-KoUG08rxc2_wxYYs2mWihTrS9Yd7C8XHYeAbUWDK7xEKrc20--u8yBExSNCiA2vXegVhfeRPHfBfRaLgGn558gdLW5U4KeWM00mM_ukEth0Glwi3EmTDDiJ42a0yG6-aBeDJOBdX4pfGSCBDtpqlCGO5uYW2KG-l3WvhJw_p9olp7Awrnto7cld8re2_FFKLT7VzHmDvSewvkYoyrWX5f_f0KPi3ZtcLP6z-0QsqrubeWYFYDQsplMKi2PUyRR57nFVSuUfh9X01cBYbCE7FBFY8a7Sob4EErKr7rDg3aktdNNN7I_ekCHCQF7eN3OkqqGqve1_DfDsOGUOnT_hwvps9Bo7KLiNwrJy4ud6UbLyW-Icohmt1BHl2WRkJAfkfwNLGhPeVmQ0mN7BPPfVP6hTiBlnbaf7dgiwbbMy9o2050-ceADTJWmxKjkEbsi-7SB_zbpdc5XfXD9Bi0oH6Grb71cFxGOYzH41HM20phixjAERq6iNQCZ4PTclIeySSJscMANGCMHS2g5Z-sPz5YOxTN3sXC4.axN8ODMyWnNUZwNbkoANdQ/__results___files/__results___7_1.png" alt /></p>
<p>In [7]:</p>
<pre><code><span class="hljs-attribute">results</span>.resid.plot(figsize=(<span class="hljs-number">12</span>,<span class="hljs-number">2</span>)) 
</code></pre><p>Out[7]:</p>
<pre><code>&lt;<span class="hljs-selector-tag">matplotlib</span><span class="hljs-selector-class">.axes</span><span class="hljs-selector-class">._subplots</span><span class="hljs-selector-class">.AxesSubplot</span> <span class="hljs-selector-tag">at</span> 0<span class="hljs-selector-tag">x7f7074d3a390</span>&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/33406342/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nbGzvutVCsrIxnly2nIyvg.DMkdh133Ww15ufatpKLuS_kFO6KdrUgMUxq9TRI-hMVtsPKFL5LlXPFLiYn5DEw3r7JSd-cbKfEWe66Sqj-2vTxupwZRtKWaVW6sofj2UPNNptW5lNO67_AkkrcCUbwx8YCq_3Gv-X_Y-KoUG08rxc2_wxYYs2mWihTrS9Yd7C8XHYeAbUWDK7xEKrc20--u8yBExSNCiA2vXegVhfeRPHfBfRaLgGn558gdLW5U4KeWM00mM_ukEth0Glwi3EmTDDiJ42a0yG6-aBeDJOBdX4pfGSCBDtpqlCGO5uYW2KG-l3WvhJw_p9olp7Awrnto7cld8re2_FFKLT7VzHmDvSewvkYoyrWX5f_f0KPi3ZtcLP6z-0QsqrubeWYFYDQsplMKi2PUyRR57nFVSuUfh9X01cBYbCE7FBFY8a7Sob4EErKr7rDg3aktdNNN7I_ekCHCQF7eN3OkqqGqve1_DfDsOGUOnT_hwvps9Bo7KLiNwrJy4ud6UbLyW-Icohmt1BHl2WRkJAfkfwNLGhPeVmQ0mN7BPPfVP6hTiBlnbaf7dgiwbbMy9o2050-ceADTJWmxKjkEbsi-7SB_zbpdc5XfXD9Bi0oH6Grb71cFxGOYzH41HM20phixjAERq6iNQCZ4PTclIeySSJscMANGCMHS2g5Z-sPz5YOxTN3sXC4.axN8ODMyWnNUZwNbkoANdQ/__results___files/__results___8_1.png" alt /></p>
<h2 id="train-test-splithttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2train-test-split">Train Test Split<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Train-Test-Split"></a></h2>
<p>In [8]:</p>
<pre><code><span class="hljs-built_in">print</span>(<span class="hljs-string">"len(df)"</span>, <span class="hljs-built_in">len</span>(df))
train = df.iloc[:<span class="hljs-number">313</span>]
test = df.iloc[<span class="hljs-number">313</span>:]
<span class="hljs-built_in">print</span>(<span class="hljs-string">"len(train)"</span>, <span class="hljs-built_in">len</span>(train))
<span class="hljs-built_in">print</span>(<span class="hljs-string">"len(test)"</span>, <span class="hljs-built_in">len</span>(test))

len(df) 325
len(train) 313
len(test) 12
</code></pre><h2 id="scale-datahttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2scale-data">Scale Data<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Scale-Data"></a></h2>
<p>In [9]:</p>
<pre><code><span class="hljs-keyword">from</span> sklearn.preprocessing <span class="hljs-keyword">import</span> MinMaxScaler
scaler = MinMaxScaler()
<span class="hljs-comment"># Ignore the dtype warning; it is just converting to floats.</span>
<span class="hljs-comment"># We fit only on the training data; otherwise we would leak information about the test set.</span>
scaler.fit(train)
</code></pre><p>Out[9]:</p>
<pre><code>MinMaxScaler(copy=<span class="hljs-keyword">True</span>, feature_range=(<span class="hljs-number">0</span>, <span class="hljs-number">1</span>))
</code></pre><p>In [10]:</p>
<pre><code>scaled_train = scaler.transform(train)
scaled_test = scaler.transform(test)
</code></pre><h2 id="time-series-generatorhttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2time-series-generator">Time Series Generator<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Time-Series-Generator"></a></h2>
<p>This class takes in a sequence of data-points gathered at equal intervals, along with time series parameters such as stride, length of history, etc., to produce batches for training/validation.</p>
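<p>In plain Python, the windowing that TimeseriesGenerator performs amounts to pairing each run of <code>length</code> consecutive points with the point that follows it; a simplified sketch with stride 1 and batch size 1 (the helper name is ours):</p>
<pre><code>def make_windows(series, length):
    # each X is `length` consecutive points; y is the point that follows
    X, y = [], []
    for i in range(len(series) - length):
        X.append(series[i:i + length])
        y.append(series[i + length])
    return X, y

X, y = make_windows([10, 20, 30, 40, 50], length=2)
print(X)  # [[10, 20], [20, 30], [30, 40]]
print(y)  # [30, 40, 50]
</code></pre>
<p>Five points with a length-2 window yield three samples, which matches len(generator) = len(scaled_train) - n_input in the cells below.</p>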
<p>In [11]:</p>
<pre><code><span class="hljs-keyword">from</span> keras.preprocessing.sequence <span class="hljs-keyword">import</span> TimeseriesGenerator
scaled_train[<span class="hljs-number">0</span>]

Using TensorFlow backend.
</code></pre><p>Out[11]:</p>
<pre><code>array([<span class="hljs-number">0.03658432</span>])
</code></pre><p>In [12]:</p>
<pre><code><span class="hljs-comment"># define generator</span>
n_input = <span class="hljs-number">2</span>
n_features = <span class="hljs-number">1</span>
generator = TimeseriesGenerator(scaled_train, scaled_train, length=n_input, batch_size=<span class="hljs-number">1</span>)
<span class="hljs-built_in">print</span>(<span class="hljs-string">'len(scaled_train)'</span>, <span class="hljs-built_in">len</span>(scaled_train))
<span class="hljs-built_in">print</span>(<span class="hljs-string">'len(generator)'</span>, <span class="hljs-built_in">len</span>(generator))  <span class="hljs-comment"># len(scaled_train) - n_input</span>

len(scaled_train) 313
len(generator) 311
</code></pre><p>In [13]:</p>
<pre><code># What does the first batch look like?
X, y = generator[0]
print(f'Given the Array: \n{X.flatten()}')
print(f'Predict this y: \n{y}')

Given the Array:
[0.03658432 0.03649885]
Predict this y:
[[0.08299855]]
</code></pre><p>In [14]:</p>
<pre><code># Let's redefine to get 12 months back and then predict the next month out
n_input = 12
generator = TimeseriesGenerator(scaled_train, scaled_train, length=n_input, batch_size=1)

# What does the first batch look like?
X, y = generator[0]
print(f'Given the Array: \n{X.flatten()}')
print(f'Predict this y: \n{y}')

Given the Array:
[0.03658432 0.03649885 0.08299855 0.13103684 0.1017181  0.12804513
 0.12266006 0.09453799 0.09359774 0.10496624 0.10334217 0.16283443]
Predict this y:
[[0.]]
</code></pre><h2 id="create-the-modelhttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2create-the-model">Create the Model<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Create-the-Model"></a></h2>
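<p>The parameter counts reported by <code>model.summary()</code> below can be checked by hand. Assuming the standard Keras LSTM parameterisation (four gates, each with an input kernel, a recurrent kernel and a bias), the arithmetic works out like this:</p>

```python
def lstm_params(n_features, units):
    # 4 gates, each with: input kernel (n_features*units),
    # recurrent kernel (units*units) and bias (units)
    return 4 * (n_features * units + units * units + units)

def dense_params(n_in, n_out):
    return n_in * n_out + n_out  # weights + bias

lstm = lstm_params(n_features=1, units=100)   # 40800
dense = dense_params(100, 1)                  # 101
print(lstm, dense, lstm + dense)              # 40800 101 40901
```

The totals match the summary printed by the model below: 40,800 LSTM parameters, 101 dense parameters, 40,901 in total.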
<p>In [15]:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM

# define model
model = Sequential()
model.add(LSTM(100, activation='relu', input_shape=(n_input, n_features)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.summary()

Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm_1 (LSTM)                (None, 100)               40800
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 101
=================================================================
Total params: 40,901
Trainable params: 40,901
Non-trainable params: 0
_________________________________________________________________
</code></pre><p>In [16]:</p>
<pre><code># fit model
# Note: fit_generator is deprecated in newer Keras versions;
# model.fit(generator, epochs=50) accepts generators directly there.
model.fit_generator(generator, epochs=50)

Epoch 1/50
301/301 [==============================] - 2s 7ms/step - loss: 0.0170
Epoch 2/50
301/301 [==============================] - 2s 5ms/step - loss: 0.0085
Epoch 3/50
301/301 [==============================] - 2s 5ms/step - loss: 0.0088
Epoch 4/50
301/301 [==============================] - 2s 5ms/step - loss: 0.0072
Epoch 5/50
301/301 [==============================] - 2s 5ms/step - loss: 0.0061
...
Epoch 49/50
301/301 [==============================] - 2s 6ms/step - loss: 0.0013
Epoch 50/50
301/301 [==============================] - 2s 6ms/step - loss: 0.0013
</code></pre><p>Out[16]:</p>
<pre><code>&lt;<span class="hljs-selector-tag">keras</span><span class="hljs-selector-class">.callbacks</span><span class="hljs-selector-class">.callbacks</span><span class="hljs-selector-class">.History</span> <span class="hljs-selector-tag">at</span> 0<span class="hljs-selector-tag">x7f706053e4a8</span>&gt;
</code></pre><p>In [17]:</p>
<pre><code>model.history.history.keys()
loss_per_epoch = model.history.history['loss']
plt.plot(range(len(loss_per_epoch)), loss_per_epoch)
</code></pre><p>Out[17]:</p>
<pre><code>[&lt;matplotlib.lines.Line2D at <span class="hljs-number">0x7f705863f7f0</span>&gt;]
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/33406342/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nbGzvutVCsrIxnly2nIyvg.DMkdh133Ww15ufatpKLuS_kFO6KdrUgMUxq9TRI-hMVtsPKFL5LlXPFLiYn5DEw3r7JSd-cbKfEWe66Sqj-2vTxupwZRtKWaVW6sofj2UPNNptW5lNO67_AkkrcCUbwx8YCq_3Gv-X_Y-KoUG08rxc2_wxYYs2mWihTrS9Yd7C8XHYeAbUWDK7xEKrc20--u8yBExSNCiA2vXegVhfeRPHfBfRaLgGn558gdLW5U4KeWM00mM_ukEth0Glwi3EmTDDiJ42a0yG6-aBeDJOBdX4pfGSCBDtpqlCGO5uYW2KG-l3WvhJw_p9olp7Awrnto7cld8re2_FFKLT7VzHmDvSewvkYoyrWX5f_f0KPi3ZtcLP6z-0QsqrubeWYFYDQsplMKi2PUyRR57nFVSuUfh9X01cBYbCE7FBFY8a7Sob4EErKr7rDg3aktdNNN7I_ekCHCQF7eN3OkqqGqve1_DfDsOGUOnT_hwvps9Bo7KLiNwrJy4ud6UbLyW-Icohmt1BHl2WRkJAfkfwNLGhPeVmQ0mN7BPPfVP6hTiBlnbaf7dgiwbbMy9o2050-ceADTJWmxKjkEbsi-7SB_zbpdc5XfXD9Bi0oH6Grb71cFxGOYzH41HM20phixjAERq6iNQCZ4PTclIeySSJscMANGCMHS2g5Z-sPz5YOxTN3sXC4.axN8ODMyWnNUZwNbkoANdQ/__results___files/__results___22_1.png" alt /></p>
<h2 id="evaluate-on-test-datahttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2evaluate-on-test-data">Evaluate on Test Data<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Evaluate-on-Test-Data"></a></h2>
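<p>The evaluation below forecasts recursively: each prediction is appended to the input window and the oldest value is dropped, so later forecasts are built on earlier ones. The window bookkeeping can be sketched in isolation with a hypothetical stand-in for <code>model.predict</code> (here a stub that returns the window mean):</p>

```python
import numpy as np

def rolling_forecast(history, predict_fn, n_input, steps):
    """Recursive multi-step forecast: feed each prediction back in."""
    current_batch = history[-n_input:].reshape((1, n_input, 1))
    preds = []
    for _ in range(steps):
        pred = predict_fn(current_batch)[0]  # shape (1,)
        preds.append(pred)
        # drop the oldest time step, append the new prediction
        current_batch = np.append(current_batch[:, 1:, :], [[pred]], axis=1)
    return np.array(preds)

# hypothetical stand-in for model.predict: mean of the window
stub = lambda batch: np.array([[batch.mean()]])
out = rolling_forecast(np.ones(20), stub, n_input=12, steps=3)
print(out.shape)  # (3, 1)
```

With a constant input series the stub keeps predicting the same value, which makes it easy to verify that the window shape stays <code>(1, n_input, 1)</code> on every step.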
<p>In [18]:</p>
<pre><code>first_eval_batch = scaled_train[-12:]
first_eval_batch
</code></pre><p>Out[18]:</p>
<pre><code>array([[0.63432772],
       [0.80776135],
       [0.72313873],
       [0.89870929],
       [1.        ],
       [0.71672793],
       [0.88648602],
       [0.75869732],
       [0.82742115],
       [0.87443371],
       [0.96025301],
       [0.5584238 ]])
</code></pre><p>In [19]:</p>
<pre><code>first_eval_batch = first_eval_batch.reshape((1, n_input, n_features))
model.predict(first_eval_batch)
</code></pre><p>Out[19]:</p>
<pre><code><span class="hljs-attribute">array</span>([[<span class="hljs-number">0</span>.<span class="hljs-number">6874868</span>]], dtype=float<span class="hljs-number">32</span>)
</code></pre><p>In [20]:</p>
<pre><code><span class="hljs-selector-tag">scaled_test</span><span class="hljs-selector-attr">[0]</span> 
</code></pre><p>Out[20]:</p>
<pre><code><span class="hljs-attribute">array</span>([<span class="hljs-number">0</span>.<span class="hljs-number">63116506</span>])
</code></pre><p>In [21]:</p>
<pre><code>test_predictions = []

first_eval_batch = scaled_train[-n_input:]
current_batch = first_eval_batch.reshape((1, n_input, n_features))

for i in range(len(test)):
    # get prediction 1 time stamp ahead ([0] grabs just the number instead of [array])
    current_pred = model.predict(current_batch)[0]

    # store prediction
    test_predictions.append(current_pred)

    # update batch to include the prediction and drop the first value
    current_batch = np.append(current_batch[:, 1:, :], [[current_pred]], axis=1)

test_predictions
</code></pre><p>Out[21]:</p>
<pre><code>[array([<span class="hljs-number">0.6874868</span>], dtype=float32), array([<span class="hljs-number">0.8057986</span>], dtype=float32), array([<span class="hljs-number">0.7580665</span>], dtype=float32), array([<span class="hljs-number">0.923397</span>], dtype=float32), array([<span class="hljs-number">0.9962742</span>], dtype=float32), array([<span class="hljs-number">0.75279814</span>], dtype=float32), array([<span class="hljs-number">0.9047118</span>], dtype=float32), array([<span class="hljs-number">0.77618504</span>], dtype=float32), array([<span class="hljs-number">0.8549177</span>], dtype=float32), array([<span class="hljs-number">0.8928125</span>], dtype=float32), array([<span class="hljs-number">0.9670736</span>], dtype=float32), array([<span class="hljs-number">0.5747522</span>], dtype=float32)]
</code></pre><p>In [22]:</p>
<pre><code><span class="hljs-attribute">scaled_test</span> 
</code></pre><p>Out[22]:</p>
<pre><code>array([[0.63116506],
       [0.82502778],
       [0.75972305],
       [0.94939738],
       [0.98743482],
       [0.82135225],
       [0.95956919],
       [0.80049577],
       [0.93025045],
       [0.95247457],
       [1.0661595 ],
       [0.65706471]])
</code></pre><h2 id="inverse-transformations-and-comparehttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2inverse-transformations-and-compare">Inverse Transformations and Compare<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Inverse-Transformations-and-Compare"></a></h2>
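<p><code>scaler.inverse_transform</code> is just the min-max scaling formula run backwards: <code>x = x_scaled * (max - min) + min</code>. A NumPy sketch of the round trip (mirroring what the scaler does, with made-up sales figures):</p>

```python
import numpy as np

x = np.array([[10415.], [12683.], [15504.]])
x_min, x_max = x.min(), x.max()

scaled = (x - x_min) / (x_max - x_min)       # forward: map onto [0, 1]
restored = scaled * (x_max - x_min) + x_min  # inverse transform

print(np.allclose(restored, x))  # True
```

This is why the inverse-transformed predictions below land back in the original sales units.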
<p>In [23]:</p>
<pre><code>true_predictions = scaler.inverse_transform(test_predictions)
true_predictions
</code></pre><p>Out[23]:</p>
<pre><code>array([[11073.90839344],
       [12458.03770655],
       [11899.6196956 ],
       [13833.82155687],
       [14686.41155297],
       [11837.98544043],
       [13615.22314852],
       [12111.58873272],
       [13032.68223149],
       [13476.01332593],
       [14344.79427296],
       [ 9755.02612317]])
</code></pre><p>In [24]:</p>
<pre><code>test['Predictions'] = true_predictions
test

/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  """Entry point for launching an IPython kernel.
</code></pre><p>Out[24]:</p>
<table>
<thead><tr><th>DATE</th><th>Sales</th><th>Predictions</th></tr></thead>
<tbody>
<tr><td>2018-02-01</td><td>10415</td><td>11073.908393</td></tr>
<tr><td>2018-03-01</td><td>12683</td><td>12458.037707</td></tr>
<tr><td>2018-04-01</td><td>11919</td><td>11899.619696</td></tr>
<tr><td>2018-05-01</td><td>14138</td><td>13833.821557</td></tr>
<tr><td>2018-06-01</td><td>14583</td><td>14686.411553</td></tr>
<tr><td>2018-07-01</td><td>12640</td><td>11837.985440</td></tr>
<tr><td>2018-08-01</td><td>14257</td><td>13615.223149</td></tr>
<tr><td>2018-09-01</td><td>12396</td><td>12111.588733</td></tr>
<tr><td>2018-10-01</td><td>13914</td><td>13032.682231</td></tr>
<tr><td>2018-11-01</td><td>14174</td><td>13476.013326</td></tr>
<tr><td>2018-12-01</td><td>15504</td><td>14344.794273</td></tr>
<tr><td>2019-01-01</td><td>10718</td><td>9755.026123</td></tr>
</tbody>
</table>
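<p>The SettingWithCopyWarning raised above appears because <code>test</code> is a slice of the original frame. A minimal sketch of one way to avoid it (toy numbers, not the notebook's actual data): take an explicit copy of the slice and assign through <code>.loc</code>.</p>
<pre><code>import pandas as pd

# Toy frame standing in for the monthly sales data; in the notebook,
# `test` is a slice taken from a larger DataFrame.
df = pd.DataFrame({"Sales": [10415, 12683, 11919]},
                  index=pd.to_datetime(["2018-02-01", "2018-03-01", "2018-04-01"]))

test = df.iloc[-2:].copy()                          # explicit copy of the slice
test.loc[:, "Predictions"] = [12458.04, 11899.62]   # assign via .loc, no warning
print(test)
</code></pre>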
<p>In [25]:</p>
<pre><code>test.plot(figsize=(12,8)) 
</code></pre><p>Out[25]:</p>
<pre><code>&lt;<span class="hljs-selector-tag">matplotlib</span><span class="hljs-selector-class">.axes</span><span class="hljs-selector-class">._subplots</span><span class="hljs-selector-class">.AxesSubplot</span> <span class="hljs-selector-tag">at</span> 0<span class="hljs-selector-tag">x7f70603e2390</span>&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/33406342/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nbGzvutVCsrIxnly2nIyvg.DMkdh133Ww15ufatpKLuS_kFO6KdrUgMUxq9TRI-hMVtsPKFL5LlXPFLiYn5DEw3r7JSd-cbKfEWe66Sqj-2vTxupwZRtKWaVW6sofj2UPNNptW5lNO67_AkkrcCUbwx8YCq_3Gv-X_Y-KoUG08rxc2_wxYYs2mWihTrS9Yd7C8XHYeAbUWDK7xEKrc20--u8yBExSNCiA2vXegVhfeRPHfBfRaLgGn558gdLW5U4KeWM00mM_ukEth0Glwi3EmTDDiJ42a0yG6-aBeDJOBdX4pfGSCBDtpqlCGO5uYW2KG-l3WvhJw_p9olp7Awrnto7cld8re2_FFKLT7VzHmDvSewvkYoyrWX5f_f0KPi3ZtcLP6z-0QsqrubeWYFYDQsplMKi2PUyRR57nFVSuUfh9X01cBYbCE7FBFY8a7Sob4EErKr7rDg3aktdNNN7I_ekCHCQF7eN3OkqqGqve1_DfDsOGUOnT_hwvps9Bo7KLiNwrJy4ud6UbLyW-Icohmt1BHl2WRkJAfkfwNLGhPeVmQ0mN7BPPfVP6hTiBlnbaf7dgiwbbMy9o2050-ceADTJWmxKjkEbsi-7SB_zbpdc5XfXD9Bi0oH6Grb71cFxGOYzH41HM20phixjAERq6iNQCZ4PTclIeySSJscMANGCMHS2g5Z-sPz5YOxTN3sXC4.axN8ODMyWnNUZwNbkoANdQ/__results___files/__results___32_1.png" alt /></p>
<h2 id="saving-and-loading-modelshttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2saving-and-loading-models">Saving and Loading Models<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Saving-and-Loading-Models"></a></h2>
<p>In [26]:</p>
<pre><code>model.save('my_rnn_model.h5')

'''from keras.models import load_model
new_model = load_model('my_rnn_model.h5')'''
</code></pre><p>Out[26]:</p>
<pre><code>"from keras.models import load_model\nnew_model = load_model('my_rnn_model.h5')"
</code></pre><p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#2.Multivariate-Time-Series-with-RNN"></a></p>
<p>This data set contains experimental measurements used to build regression models of appliance energy use in a low-energy building. Data Set Information: the data covers about 4.5 months at 10-minute intervals. House temperature and humidity conditions were monitored with a ZigBee wireless sensor network; each wireless node transmitted its temperature and humidity readings roughly every 3.3 minutes, and the wireless data was then averaged over 10-minute periods. The energy data was logged every 10 minutes with m-bus energy meters. Weather data from the nearest airport weather station (Chievres Airport, Belgium) was downloaded from a public data set at Reliable Prognosis (rp5.ru) and merged with the experimental data sets using the date and time column. Two random variables were included in the data set for testing the regression models and filtering out non-predictive attributes (parameters).</p>
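<p>The merge described above can be sketched as a join on the shared timestamp index. The column names and values below are illustrative miniatures, not the real files:</p>
<pre><code>import pandas as pd

# Illustrative miniature: indoor sensor readings and airport weather
# records aligned on their timestamps (made-up values).
sensors = pd.DataFrame(
    {"T1": [19.89, 19.89], "RH_1": [47.60, 46.69]},
    index=pd.to_datetime(["2016-01-11 17:00", "2016-01-11 17:10"]))
weather = pd.DataFrame(
    {"T_out": [6.60, 6.48], "Windspeed": [7.00, 6.67]},
    index=pd.to_datetime(["2016-01-11 17:00", "2016-01-11 17:10"]))

merged = sensors.join(weather)   # align rows on the shared datetime index
print(merged.columns.tolist())   # ['T1', 'RH_1', 'T_out', 'Windspeed']
</code></pre>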
<h2 id="datahttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2data">Data<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Data"></a></h2>
<p>Let's read in the data set:</p>
<p>In [27]:</p>
<pre><code>import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt

df = pd.read_csv('../input/for-simple-exercises-time-series-forecasting/energydata_complete.csv',
                 index_col='date', infer_datetime_format=True)
df.head()
</code></pre><p>Out[27]:</p>
<pre><code>                     Appliances  lights     T1       RH_1    T2       RH_2     T3       RH_3         T4       RH_4  ...         T9   RH_9     T_out  Press_mm_hg  RH_out  Windspeed  Visibility  Tdewpoint        rv1        rv2
date
2016-01-11 17:00:00          60      30  19.89  47.596667  19.2  44.790000  19.79  44.730000  19.000000  45.566667  ...  17.033333  45.53  6.600000        733.5    92.0   7.000000   63.000000        5.3  13.275433  13.275433
2016-01-11 17:10:00          60      30  19.89  46.693333  19.2  44.722500  19.79  44.790000  19.000000  45.992500  ...  17.066667  45.56  6.483333        733.6    92.0   6.666667   59.166667        5.2  18.606195  18.606195
2016-01-11 17:20:00          50      30  19.89  46.300000  19.2  44.626667  19.79  44.933333  18.926667  45.890000  ...  17.000000  45.50  6.366667        733.7    92.0   6.333333   55.333333        5.1  28.642668  28.642668
2016-01-11 17:30:00          50      40  19.89  46.066667  19.2  44.590000  19.79  45.000000  18.890000  45.723333  ...  17.000000  45.40  6.250000        733.8    92.0   6.000000   51.500000        5.0  45.410389  45.410389
2016-01-11 17:40:00          60      40  19.89  46.333333  19.2  44.530000  19.79  45.000000  18.890000  45.530000  ...  17.000000  45.40  6.133333        733.9    92.0   5.666667   47.666667        4.9  10.084097  10.084097
</code></pre>
<p>5 rows × 28 columns</p>
<p>In [28]:</p>
<pre><code>df.info()

&lt;class 'pandas.core.frame.DataFrame'&gt;
Index: 19735 entries, 2016-01-11 17:00:00 to 2016-05-27 18:00:00
Data columns (total 28 columns):
Appliances     19735 non-null int64
lights         19735 non-null int64
T1             19735 non-null float64
RH_1           19735 non-null float64
T2             19735 non-null float64
RH_2           19735 non-null float64
T3             19735 non-null float64
RH_3           19735 non-null float64
T4             19735 non-null float64
RH_4           19735 non-null float64
T5             19735 non-null float64
RH_5           19735 non-null float64
T6             19735 non-null float64
RH_6           19735 non-null float64
T7             19735 non-null float64
RH_7           19735 non-null float64
T8             19735 non-null float64
RH_8           19735 non-null float64
T9             19735 non-null float64
RH_9           19735 non-null float64
T_out          19735 non-null float64
Press_mm_hg    19735 non-null float64
RH_out         19735 non-null float64
Windspeed      19735 non-null float64
Visibility     19735 non-null float64
Tdewpoint      19735 non-null float64
rv1            19735 non-null float64
rv2            19735 non-null float64
dtypes: float64(26), int64(2)
memory usage: 4.4+ MB
</code></pre><p>In [29]:</p>
<pre><code>df['Windspeed'].plot(figsize=(12,8)) 
</code></pre><p>Out[29]:</p>
<pre><code>&lt;matplotlib.axes._subplots.AxesSubplot at 0x7f70583a95f8&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/33406342/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nbGzvutVCsrIxnly2nIyvg.DMkdh133Ww15ufatpKLuS_kFO6KdrUgMUxq9TRI-hMVtsPKFL5LlXPFLiYn5DEw3r7JSd-cbKfEWe66Sqj-2vTxupwZRtKWaVW6sofj2UPNNptW5lNO67_AkkrcCUbwx8YCq_3Gv-X_Y-KoUG08rxc2_wxYYs2mWihTrS9Yd7C8XHYeAbUWDK7xEKrc20--u8yBExSNCiA2vXegVhfeRPHfBfRaLgGn558gdLW5U4KeWM00mM_ukEth0Glwi3EmTDDiJ42a0yG6-aBeDJOBdX4pfGSCBDtpqlCGO5uYW2KG-l3WvhJw_p9olp7Awrnto7cld8re2_FFKLT7VzHmDvSewvkYoyrWX5f_f0KPi3ZtcLP6z-0QsqrubeWYFYDQsplMKi2PUyRR57nFVSuUfh9X01cBYbCE7FBFY8a7Sob4EErKr7rDg3aktdNNN7I_ekCHCQF7eN3OkqqGqve1_DfDsOGUOnT_hwvps9Bo7KLiNwrJy4ud6UbLyW-Icohmt1BHl2WRkJAfkfwNLGhPeVmQ0mN7BPPfVP6hTiBlnbaf7dgiwbbMy9o2050-ceADTJWmxKjkEbsi-7SB_zbpdc5XfXD9Bi0oH6Grb71cFxGOYzH41HM20phixjAERq6iNQCZ4PTclIeySSJscMANGCMHS2g5Z-sPz5YOxTN3sXC4.axN8ODMyWnNUZwNbkoANdQ/__results___files/__results___40_1.png" alt /></p>
<p>In [30]:</p>
<pre><code>df['Appliances'].plot(figsize=(12,8)) 
</code></pre><p>Out[30]:</p>
<pre><code>&lt;matplotlib.axes._subplots.AxesSubplot at 0x7f705839f518&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/33406342/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nbGzvutVCsrIxnly2nIyvg.DMkdh133Ww15ufatpKLuS_kFO6KdrUgMUxq9TRI-hMVtsPKFL5LlXPFLiYn5DEw3r7JSd-cbKfEWe66Sqj-2vTxupwZRtKWaVW6sofj2UPNNptW5lNO67_AkkrcCUbwx8YCq_3Gv-X_Y-KoUG08rxc2_wxYYs2mWihTrS9Yd7C8XHYeAbUWDK7xEKrc20--u8yBExSNCiA2vXegVhfeRPHfBfRaLgGn558gdLW5U4KeWM00mM_ukEth0Glwi3EmTDDiJ42a0yG6-aBeDJOBdX4pfGSCBDtpqlCGO5uYW2KG-l3WvhJw_p9olp7Awrnto7cld8re2_FFKLT7VzHmDvSewvkYoyrWX5f_f0KPi3ZtcLP6z-0QsqrubeWYFYDQsplMKi2PUyRR57nFVSuUfh9X01cBYbCE7FBFY8a7Sob4EErKr7rDg3aktdNNN7I_ekCHCQF7eN3OkqqGqve1_DfDsOGUOnT_hwvps9Bo7KLiNwrJy4ud6UbLyW-Icohmt1BHl2WRkJAfkfwNLGhPeVmQ0mN7BPPfVP6hTiBlnbaf7dgiwbbMy9o2050-ceADTJWmxKjkEbsi-7SB_zbpdc5XfXD9Bi0oH6Grb71cFxGOYzH41HM20phixjAERq6iNQCZ4PTclIeySSJscMANGCMHS2g5Z-sPz5YOxTN3sXC4.axN8ODMyWnNUZwNbkoANdQ/__results___files/__results___41_1.png" alt /></p>
<h2 id="train-test-splithttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2train-test-split">Train Test Split<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Train-Test-Split"></a></h2>
<p>In [31]:</p>
<pre><code>df = df.loc['2016-05-01':]
df = df.round(2)
print('len(df)', len(df))

test_days = 2
test_ind = test_days * 144  # 24*60/10 = 144 measurements per day
test_ind

len(df) 3853
</code></pre><p>Out[31]:</p>
<pre><code>288
</code></pre><p>In [32]:</p>
<pre><code>train = df.iloc[:-test_ind]
test = df.iloc[-test_ind:]
</code></pre><h2 id="scale-datahttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2scale-data">Scale Data<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Scale-Data"></a></h2>
<p>In [33]:</p>
<pre><code>from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
# IGNORE WARNING, IT'S JUST CONVERTING TO FLOATS
# WE ONLY FIT TO TRAINING DATA, OTHERWISE WE ARE CHEATING BY ASSUMING INFO ABOUT THE TEST SET
scaler.fit(train)
</code></pre><p>Out[33]:</p>
<pre><code>MinMaxScaler(copy=True, feature_range=(0, 1))
</code></pre><p>In [34]:</p>
<pre><code>scaled_train = scaler.transform(train)
scaled_test = scaler.transform(test)
</code></pre><h2 id="time-series-generatorhttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2time-series-generator">Time Series Generator<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Time-Series-Generator"></a></h2>
<p>This class takes in a sequence of data-points gathered at equal intervals, along with time series parameters such as stride, length of history, etc., to produce batches for training/validation.</p>
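<p>What the generator produces can be reproduced by hand. A minimal sketch of the windowing logic on a toy univariate series (this mimics the behavior, it is not the Keras implementation itself):</p>
<pre><code>import numpy as np

# Each sample is `length` consecutive rows, and the target is the row
# that immediately follows the window -- the same pairing that
# TimeseriesGenerator yields with stride=1 and batch_size=1.
def make_windows(data, length):
    X, y = [], []
    for i in range(len(data) - length):
        X.append(data[i:i + length])
        y.append(data[i + length])
    return np.array(X), np.array(y)

series = np.arange(10, dtype=float).reshape(-1, 1)  # toy univariate series
X, y = make_windows(series, length=3)
print(X.shape, y.shape)    # (7, 3, 1) (7, 1)
print(X[0].ravel(), y[0])  # [0. 1. 2.] [3.]
</code></pre>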
<p>In [35]:</p>
<pre><code>from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator

# define generator
length = 144    # Length of the output sequences (in number of timesteps)
batch_size = 1  # Number of timeseries samples in each batch

generator = TimeseriesGenerator(scaled_train, scaled_train,
                                length=length, batch_size=batch_size)
</code></pre><p>In [36]:</p>
<pre><code>print('len(scaled_train)', len(scaled_train))
print('len(generator)   ', len(generator))

X, y = generator[0]
print(f'Given the Array: \n{X.flatten()}')
print(f'Predict this y: \n{y}')

len(scaled_train) 3565
len(generator)    3421
Given the Array: 
[0.03896104 0.         0.13798978 ... 0.14319527 0.75185111 0.75185111]
Predict this y: 
[[0.03896104 0.         0.30834753 0.29439421 0.16038492 0.49182278
  0.0140056  0.36627907 0.24142857 0.24364791 0.12650602 0.36276002
  0.12       0.28205572 0.06169297 0.15759185 0.34582624 0.39585974
  0.09259259 0.39649608 0.18852459 0.96052632 0.59210526 0.1
  0.58333333 0.13609467 0.4576746  0.4576746 ]]
</code></pre><h2 id="create-the-modelhttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2create-the-model">Create the Model<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Create-the-Model"></a></h2>
<p>In [37]:</p>
<pre><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM

scaled_train.shape
</code></pre><p>Out[37]:</p>
<pre><code>(3565, 28)
</code></pre><p>In [38]:</p>
<pre><code># define model
model = Sequential()
# LSTM layer
model.add(LSTM(100, input_shape=(length, scaled_train.shape[1])))
# Final Prediction (one neuron per feature)
model.add(Dense(scaled_train.shape[1]))
model.compile(optimizer='adam', loss='mse')
model.summary()

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm (LSTM)                  (None, 100)               51600     
_________________________________________________________________
dense (Dense)                (None, 28)                2828      
=================================================================
Total params: 54,428
Trainable params: 54,428
Non-trainable params: 0
_________________________________________________________________
</code></pre><h2 id="earlystoppinghttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2earlystopping">EarlyStopping<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#EarlyStopping"></a></h2>
<p>In [39]:</p>
<pre><code>from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=1)

validation_generator = TimeseriesGenerator(scaled_test, scaled_test,
                                           length=length, batch_size=batch_size)

model.fit_generator(generator, epochs=10,
                    validation_data=validation_generator,
                    callbacks=[early_stop])

Train for 3421 steps, validate for 144 steps
Epoch 1/10
3421/3421 [==============================] - 182s 53ms/step - loss: 0.0114 - val_loss: 0.0102
Epoch 2/10
3421/3421 [==============================] - 180s 52ms/step - loss: 0.0079 - val_loss: 0.0086
Epoch 3/10
3421/3421 [==============================] - 181s 53ms/step - loss: 0.0075 - val_loss: 0.0084
Epoch 4/10
2295/3421 [===================&gt;..........] - ETA: 58s - loss: 0.0073
</code></pre><p>In [40]:</p>
<pre><code>model.history.history.keys()

losses = pd.DataFrame(model.history.history)
losses.plot()
</code></pre><p>Out[40]:</p>
<pre><code>&lt;matplotlib.axes._subplots.AxesSubplot at 0x7f701f602fd0&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/33406342/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nbGzvutVCsrIxnly2nIyvg.DMkdh133Ww15ufatpKLuS_kFO6KdrUgMUxq9TRI-hMVtsPKFL5LlXPFLiYn5DEw3r7JSd-cbKfEWe66Sqj-2vTxupwZRtKWaVW6sofj2UPNNptW5lNO67_AkkrcCUbwx8YCq_3Gv-X_Y-KoUG08rxc2_wxYYs2mWihTrS9Yd7C8XHYeAbUWDK7xEKrc20--u8yBExSNCiA2vXegVhfeRPHfBfRaLgGn558gdLW5U4KeWM00mM_ukEth0Glwi3EmTDDiJ42a0yG6-aBeDJOBdX4pfGSCBDtpqlCGO5uYW2KG-l3WvhJw_p9olp7Awrnto7cld8re2_FFKLT7VzHmDvSewvkYoyrWX5f_f0KPi3ZtcLP6z-0QsqrubeWYFYDQsplMKi2PUyRR57nFVSuUfh9X01cBYbCE7FBFY8a7Sob4EErKr7rDg3aktdNNN7I_ekCHCQF7eN3OkqqGqve1_DfDsOGUOnT_hwvps9Bo7KLiNwrJy4ud6UbLyW-Icohmt1BHl2WRkJAfkfwNLGhPeVmQ0mN7BPPfVP6hTiBlnbaf7dgiwbbMy9o2050-ceADTJWmxKjkEbsi-7SB_zbpdc5XfXD9Bi0oH6Grb71cFxGOYzH41HM20phixjAERq6iNQCZ4PTclIeySSJscMANGCMHS2g5Z-sPz5YOxTN3sXC4.axN8ODMyWnNUZwNbkoANdQ/__results___files/__results___56_1.png" alt /></p>
<h2 id="evaluate-on-test-datahttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2evaluate-on-test-data">Evaluate on Test Data<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Evaluate-on-Test-Data"></a></h2>
<p>In [41]:</p>
<pre><code>first_eval_batch = scaled_train[-length:]
first_eval_batch
</code></pre><p>Out[41]:</p>
<pre><code>array([[0.1038961 , 0.        , 0.72231687, ..., 0.53550296, 0.15909546,
        0.15909546],
       [0.11688312, 0.        , 0.73424191, ..., 0.52662722, 0.40344207,
        0.40344207],
       [0.11688312, 0.        , 0.73424191, ..., 0.51775148, 0.20452271,
        0.20452271],
       ...,
       [0.18181818, 0.        , 0.70017036, ..., 0.50118343, 0.33340004,
        0.33340004],
       [0.09090909, 0.        , 0.70017036, ..., 0.51952663, 0.78747248,
        0.78747248],
       [0.1038961 , 0.        , 0.70017036, ..., 0.53846154, 0.77286372,
        0.77286372]])
</code></pre><p>In [42]:</p>
<pre><code>first_eval_batch = first_eval_batch.reshape((1, length, scaled_train.shape[1]))
model.predict(first_eval_batch)
</code></pre><p>Out[42]:</p>
<pre><code>array([[ 0.10138211,  0.06747055,  0.7054    ,  0.39806256,  0.54101586,
         0.43319184,  0.4200446 ,  0.4243666 ,  0.7039989 ,  0.40865916,
         0.30799067,  0.36109492,  0.6687389 , -0.00205898,  0.6135764 ,
         0.42317435,  0.5408545 ,  0.31506443,  0.49856254,  0.3375144 ,
         0.595571  ,  0.53723574,  0.4301601 ,  0.2183933 ,  0.5878991 ,
         0.5141734 ,  0.5099164 ,  0.5046277 ]], dtype=float32)
</code></pre><p>In [43]:</p>
<pre><code>scaled_test[0]
</code></pre><p>Out[43]:</p>
<pre><code>array([0.19480519, 0.        , 0.70017036, 0.3920434 , 0.53007217,
       0.41064526, 0.40616246, 0.41913319, 0.72714286, 0.4115245 ,
       0.30722892, 0.36445121, 0.66777778, 0.        , 0.61119082,
       0.39840637, 0.51618399, 0.32953105, 0.53703704, 0.34024896,
       0.6057377 , 0.52631579, 0.41881579, 0.2       , 0.55283333,
       0.53372781, 0.76305783, 0.76305783])
</code></pre><p>In [44]:</p>
<pre><code>n_features = scaled_train.shape[1]
test_predictions = []

first_eval_batch = scaled_train[-length:]
current_batch = first_eval_batch.reshape((1, length, n_features))

for i in range(len(test)):
    # get prediction 1 time stamp ahead ([0] grabs the row instead of [array])
    current_pred = model.predict(current_batch)[0]
    # store prediction
    test_predictions.append(current_pred)
    # update batch to include the prediction and drop the first value
    current_batch = np.append(current_batch[:, 1:, :], [[current_pred]], axis=1)
</code></pre><h2 id="inverse-transformations-and-comparehttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2inverse-transformations-and-compare">Inverse Transformations and Compare<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Inverse-Transformations-and-Compare"></a></h2>
<p>In [45]:</p>
<pre><code>true_predictions = scaler.inverse_transform(test_predictions)
true_predictions = pd.DataFrame(data=true_predictions, columns=test.columns)
true_predictions
</code></pre><p>Out[45]:</p>
<p><em>Output: <code>true_predictions</code>, a DataFrame of 288 rows × 28 columns (Appliances, lights, T1, RH_1, T2, RH_2, T3, RH_3, T4, RH_4, ..., T9, RH_9, T_out, Press_mm_hg, RH_out, Windspeed, Visibility, Tdewpoint, rv1, rv2). The recursive forecasts drift to implausible values in later rows (e.g. large negative Appliances readings).</em></p>
<h2 id="use-facebooks-prophet-library-for-forecasting">Use Facebook's Prophet Library for forecasting<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#3.Use-Facebook's-Prophet-Library-for-forecasting"></a></h2>
<p>In [46]:</p>
<pre><code>import pandas as pd
from fbprophet import Prophet
</code></pre><h2 id="load-datahttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2load-data">Load Data<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Load-Data"></a></h2>
<p>The input to Prophet is always a dataframe with two columns: ds and y. The ds (datestamp) column should be of a format expected by Pandas, ideally YYYY-MM-DD for a date or YYYY-MM-DD HH:MM:SS for a timestamp. The y column must be numeric, and represents the measurement we wish to forecast.</p>
<p>In [47]:</p>
<pre><code>df = pd.read_csv('../input/for-simple-exercises-time-series-forecasting/Miles_Traveled.csv')
df.head()
</code></pre><p>Out[47]:</p>
<pre><code>         DATE  TRFVOLUSM227NFWA
0  1970-01-01           80173.0
1  1970-02-01           77442.0
2  1970-03-01           90223.0
3  1970-04-01           89956.0
4  1970-05-01           97972.0
</code></pre>
<p>In [48]:</p>
<pre><code>df.columns = ['ds', 'y']
df['ds'] = pd.to_datetime(df['ds'])
df.info()

&lt;class 'pandas.core.frame.DataFrame'&gt;
RangeIndex: 588 entries, 0 to 587
Data columns (total 2 columns):
ds    588 non-null datetime64[ns]
y     588 non-null float64
dtypes: datetime64[ns](1), float64(1)
memory usage: 9.3 KB
</code></pre><p>In [49]:</p>
<pre><code>pd.plotting.register_matplotlib_converters()
try:
    df.plot(x='ds', y='y', figsize=(18, 6))
except TypeError as e:
    figure_or_exception = "TypeError: " + str(e)
else:
    figure_or_exception = df.set_index('ds').y.plot().get_figure()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/33406342/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nbGzvutVCsrIxnly2nIyvg.DMkdh133Ww15ufatpKLuS_kFO6KdrUgMUxq9TRI-hMVtsPKFL5LlXPFLiYn5DEw3r7JSd-cbKfEWe66Sqj-2vTxupwZRtKWaVW6sofj2UPNNptW5lNO67_AkkrcCUbwx8YCq_3Gv-X_Y-KoUG08rxc2_wxYYs2mWihTrS9Yd7C8XHYeAbUWDK7xEKrc20--u8yBExSNCiA2vXegVhfeRPHfBfRaLgGn558gdLW5U4KeWM00mM_ukEth0Glwi3EmTDDiJ42a0yG6-aBeDJOBdX4pfGSCBDtpqlCGO5uYW2KG-l3WvhJw_p9olp7Awrnto7cld8re2_FFKLT7VzHmDvSewvkYoyrWX5f_f0KPi3ZtcLP6z-0QsqrubeWYFYDQsplMKi2PUyRR57nFVSuUfh9X01cBYbCE7FBFY8a7Sob4EErKr7rDg3aktdNNN7I_ekCHCQF7eN3OkqqGqve1_DfDsOGUOnT_hwvps9Bo7KLiNwrJy4ud6UbLyW-Icohmt1BHl2WRkJAfkfwNLGhPeVmQ0mN7BPPfVP6hTiBlnbaf7dgiwbbMy9o2050-ceADTJWmxKjkEbsi-7SB_zbpdc5XfXD9Bi0oH6Grb71cFxGOYzH41HM20phixjAERq6iNQCZ4PTclIeySSJscMANGCMHS2g5Z-sPz5YOxTN3sXC4.axN8ODMyWnNUZwNbkoANdQ/__results___files/__results___69_0.png" alt /></p>
<p>In [50]:</p>
<pre><code>print('len(df)', len(df))
print('len(df) - 12 = ', len(df) - 12)

len(df) 588
len(df) - 12 =  576
</code></pre><p>In [51]:</p>
<pre><code>train = df.iloc[:576]
test = df.iloc[576:]
</code></pre><h2 id="create-and-fit-modelhttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2create-and-fit-model">Create and Fit Model<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Create-and-Fit-Model"></a></h2>
<p>In [52]:</p>
<pre><code># Fit the model on the training portion of the data
m = Prophet()
m.fit(train)
</code></pre><p>Out[52]:</p>
<pre><code>&lt;fbprophet.forecaster.Prophet at 0x7f70730c06a0&gt;
</code></pre><h2 id="forecastinghttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2forecasting">Forecasting<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Forecasting"></a></h2>
<p><strong>NOTE: Prophet by default is for daily data. You need to pass a frequency for sub-daily or monthly data. Info: <a target="_blank" href="https://facebook.github.io/prophet/docs/non-daily_data.html">https://facebook.github.io/prophet/docs/non-daily_data.html</a></strong></p>
<p>In [53]:</p>
<pre><code>future = m.make_future_dataframe(periods=12, freq='MS')
forecast = m.predict(future)
</code></pre><p>In [54]:</p>
<pre><code>forecast.tail()
</code></pre><p>Out[54]:</p>
<pre><code>             ds          trend     yhat_lower     yhat_upper    trend_lower    trend_upper
583  2018-08-01  263410.800604  274535.269790  285644.123755  263342.030655  263476.981575
584  2018-09-01  263552.915940  256177.190765  267761.256896  263449.458400  263643.024845
585  2018-10-01  263690.446911  263051.398780  274886.109947  263541.695987  263825.135833
586  2018-11-01  263832.562247  249806.070474  261028.788020  263640.580642  264000.846531
587  2018-12-01  263970.093217  251087.538081  262731.050771  263724.773757  264186.916157

     additive_terms        yearly  multiplicative_terms           yhat
583    16448.013049  16448.013049                   0.0  279858.813654
584    -1670.418537  -1670.418537                   0.0  261882.497404
585     5305.505873   5305.505873                   0.0  268995.952784
586    -8208.986942  -8208.986942                   0.0  255623.575305
587    -6922.716937  -6922.716937                   0.0  257047.376280
</code></pre><p><em>(The corresponding <code>*_lower</code> and <code>*_upper</code> columns of <code>additive_terms</code>, <code>yearly</code> and <code>multiplicative_terms</code> are identical to the columns shown.)</em></p>
<p>In [55]:</p>
<pre><code>test.tail()
</code></pre><p>Out[55]:</p>
<pre><code>             ds         y
583  2018-08-01  286608.0
584  2018-09-01  260595.0
585  2018-10-01  282174.0
586  2018-11-01  258590.0
587  2018-12-01  268413.0
</code></pre>
<p>In [56]:</p>
<pre><code>forecast.columns
</code></pre><p>Out[56]:</p>
<pre><code>Index(['ds', 'trend', 'yhat_lower', 'yhat_upper', 'trend_lower',
       'trend_upper', 'additive_terms', 'additive_terms_lower',
       'additive_terms_upper', 'yearly', 'yearly_lower', 'yearly_upper',
       'multiplicative_terms', 'multiplicative_terms_lower',
       'multiplicative_terms_upper', 'yhat'],
      dtype='object')
</code></pre><p>In [57]:</p>
<pre><code>forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail(12)
</code></pre><p>Out[57]:</p>
<pre><code>             ds           yhat     yhat_lower     yhat_upper
576  2018-01-01  243850.453937  238143.777398  249480.190740
577  2018-02-01  235480.588794  229702.624041  241029.888771
578  2018-03-01  262683.274392  256372.318521  268163.016848
579  2018-04-01  262886.236399  257227.047581  269018.659587
580  2018-05-01  272609.522601  266952.781615  278452.756472
581  2018-06-01  272862.615300  267443.492047  278588.217647
582  2018-07-01  279321.841101  273416.105839  284843.281259
583  2018-08-01  279858.813654  274535.269790  285644.123755
584  2018-09-01  261882.497404  256177.190765  267761.256896
585  2018-10-01  268995.952784  263051.398780  274886.109947
586  2018-11-01  255623.575305  249806.070474  261028.788020
587  2018-12-01  257047.376280  251087.538081  262731.050771
</code></pre>
<h3 id="plotting-forecasthttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2plotting-forecast">Plotting Forecast<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Plotting-Forecast"></a></h3>
<p>We can use Prophet's own built in plotting tools</p>
<p>In [58]:</p>
<pre><code>m.plot(forecast);
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/33406342/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nbGzvutVCsrIxnly2nIyvg.DMkdh133Ww15ufatpKLuS_kFO6KdrUgMUxq9TRI-hMVtsPKFL5LlXPFLiYn5DEw3r7JSd-cbKfEWe66Sqj-2vTxupwZRtKWaVW6sofj2UPNNptW5lNO67_AkkrcCUbwx8YCq_3Gv-X_Y-KoUG08rxc2_wxYYs2mWihTrS9Yd7C8XHYeAbUWDK7xEKrc20--u8yBExSNCiA2vXegVhfeRPHfBfRaLgGn558gdLW5U4KeWM00mM_ukEth0Glwi3EmTDDiJ42a0yG6-aBeDJOBdX4pfGSCBDtpqlCGO5uYW2KG-l3WvhJw_p9olp7Awrnto7cld8re2_FFKLT7VzHmDvSewvkYoyrWX5f_f0KPi3ZtcLP6z-0QsqrubeWYFYDQsplMKi2PUyRR57nFVSuUfh9X01cBYbCE7FBFY8a7Sob4EErKr7rDg3aktdNNN7I_ekCHCQF7eN3OkqqGqve1_DfDsOGUOnT_hwvps9Bo7KLiNwrJy4ud6UbLyW-Icohmt1BHl2WRkJAfkfwNLGhPeVmQ0mN7BPPfVP6hTiBlnbaf7dgiwbbMy9o2050-ceADTJWmxKjkEbsi-7SB_zbpdc5XfXD9Bi0oH6Grb71cFxGOYzH41HM20phixjAERq6iNQCZ4PTclIeySSJscMANGCMHS2g5Z-sPz5YOxTN3sXC4.axN8ODMyWnNUZwNbkoANdQ/__results___files/__results___81_0.png" alt /></p>
<p>In [59]:</p>
<pre><code>import matplotlib.pyplot as plt
%matplotlib inline

m.plot(forecast)
plt.xlim(pd.to_datetime('2003-01-01'), pd.to_datetime('2007-01-01'))
</code></pre><p>Out[59]:</p>
<pre><code>(731216.0, 732677.0)
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/33406342/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nbGzvutVCsrIxnly2nIyvg.DMkdh133Ww15ufatpKLuS_kFO6KdrUgMUxq9TRI-hMVtsPKFL5LlXPFLiYn5DEw3r7JSd-cbKfEWe66Sqj-2vTxupwZRtKWaVW6sofj2UPNNptW5lNO67_AkkrcCUbwx8YCq_3Gv-X_Y-KoUG08rxc2_wxYYs2mWihTrS9Yd7C8XHYeAbUWDK7xEKrc20--u8yBExSNCiA2vXegVhfeRPHfBfRaLgGn558gdLW5U4KeWM00mM_ukEth0Glwi3EmTDDiJ42a0yG6-aBeDJOBdX4pfGSCBDtpqlCGO5uYW2KG-l3WvhJw_p9olp7Awrnto7cld8re2_FFKLT7VzHmDvSewvkYoyrWX5f_f0KPi3ZtcLP6z-0QsqrubeWYFYDQsplMKi2PUyRR57nFVSuUfh9X01cBYbCE7FBFY8a7Sob4EErKr7rDg3aktdNNN7I_ekCHCQF7eN3OkqqGqve1_DfDsOGUOnT_hwvps9Bo7KLiNwrJy4ud6UbLyW-Icohmt1BHl2WRkJAfkfwNLGhPeVmQ0mN7BPPfVP6hTiBlnbaf7dgiwbbMy9o2050-ceADTJWmxKjkEbsi-7SB_zbpdc5XfXD9Bi0oH6Grb71cFxGOYzH41HM20phixjAERq6iNQCZ4PTclIeySSJscMANGCMHS2g5Z-sPz5YOxTN3sXC4.axN8ODMyWnNUZwNbkoANdQ/__results___files/__results___82_1.png" alt /></p>
<p>In [60]:</p>
<pre><code>m.plot_components(forecast);
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/33406342/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nbGzvutVCsrIxnly2nIyvg.DMkdh133Ww15ufatpKLuS_kFO6KdrUgMUxq9TRI-hMVtsPKFL5LlXPFLiYn5DEw3r7JSd-cbKfEWe66Sqj-2vTxupwZRtKWaVW6sofj2UPNNptW5lNO67_AkkrcCUbwx8YCq_3Gv-X_Y-KoUG08rxc2_wxYYs2mWihTrS9Yd7C8XHYeAbUWDK7xEKrc20--u8yBExSNCiA2vXegVhfeRPHfBfRaLgGn558gdLW5U4KeWM00mM_ukEth0Glwi3EmTDDiJ42a0yG6-aBeDJOBdX4pfGSCBDtpqlCGO5uYW2KG-l3WvhJw_p9olp7Awrnto7cld8re2_FFKLT7VzHmDvSewvkYoyrWX5f_f0KPi3ZtcLP6z-0QsqrubeWYFYDQsplMKi2PUyRR57nFVSuUfh9X01cBYbCE7FBFY8a7Sob4EErKr7rDg3aktdNNN7I_ekCHCQF7eN3OkqqGqve1_DfDsOGUOnT_hwvps9Bo7KLiNwrJy4ud6UbLyW-Icohmt1BHl2WRkJAfkfwNLGhPeVmQ0mN7BPPfVP6hTiBlnbaf7dgiwbbMy9o2050-ceADTJWmxKjkEbsi-7SB_zbpdc5XfXD9Bi0oH6Grb71cFxGOYzH41HM20phixjAERq6iNQCZ4PTclIeySSJscMANGCMHS2g5Z-sPz5YOxTN3sXC4.axN8ODMyWnNUZwNbkoANdQ/__results___files/__results___83_0.png" alt /></p>
<p>In [61]:</p>
<pre><code>from statsmodels.tools.eval_measures import rmse

predictions = forecast.iloc[-12:]['yhat']
predictions
</code></pre><p>Out[61]:</p>
<pre><code>576    243850.453937
577    235480.588794
578    262683.274392
579    262886.236399
580    272609.522601
581    272862.615300
582    279321.841101
583    279858.813654
584    261882.497404
585    268995.952784
586    255623.575305
587    257047.376280
Name: yhat, dtype: float64
</code></pre><p>In [62]:</p>
<pre><code>test['y']
</code></pre><p>Out[62]:</p>
<pre><code>576    245695.0
577    226660.0
578    268480.0
579    272475.0
580    286164.0
581    280877.0
582    288145.0
583    286608.0
584    260595.0
585    282174.0
586    258590.0
587    268413.0
Name: y, dtype: float64
</code></pre><p>In [63]:</p>
<pre><code>rmse(predictions, test['y'])
</code></pre><p>Out[63]:</p>
<pre><code>8618.783155559411
</code></pre><p>In [64]:</p>
<pre><code>test.mean()
</code></pre><p>Out[64]:</p>
<pre><code>y    268739.666667
dtype: float64
</code></pre><h2 id="prophet-diagnosticshttpswwwkagglecombulentsiyahtime-series-forecasting-and-analysis-part-2prophet-diagnostics">Prophet Diagnostics<a target="_blank" href="https://www.kaggle.com/bulentsiyah/time-series-forecasting-and-analysis-part-2/#Prophet-Diagnostics"></a></h2>
<p>Prophet includes functionality for time series cross validation to measure forecast error using historical data. This is done by selecting cutoff points in the history, and for each of them fitting the model using data only up to that cutoff point. We can then compare the forecasted values to the actual values.</p>
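<p>The cutoff logic can be illustrated with a simplified, hand-rolled sketch (an assumption-laden toy, not Prophet's actual implementation — the real logic in <code>fbprophet.diagnostics</code> differs in detail, e.g. it works backwards from the end of the history):</p>

```python
from datetime import date, timedelta

def simple_cutoffs(start, end, initial, period, horizon):
    """Toy illustration of cross-validation cutoffs: the first cutoff comes
    after `initial` history, later cutoffs advance by `period`, and every
    fold must leave `horizon` of data after its cutoff for evaluation."""
    cutoffs = []
    c = start + initial
    while c + horizon <= end:
        cutoffs.append(c)
        c += period
    return cutoffs

# Roughly the settings used in this notebook: 5-year initial window,
# 5-year period, 1-year horizon, over the dataset's 1970-2018 span.
cuts = simple_cutoffs(date(1970, 1, 1), date(2018, 12, 1),
                      initial=timedelta(days=5 * 365),
                      period=timedelta(days=5 * 365),
                      horizon=timedelta(days=365))
```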
<p>In [65]:</p>
<pre><code>from fbprophet.diagnostics import cross_validation, performance_metrics
from fbprophet.plot import plot_cross_validation_metric

len(df)
len(df) / 12

# Initial 5 years training period
initial = 5 * 365
initial = str(initial) + ' days'
# Fold every 5 years
period = 5 * 365
period = str(period) + ' days'
# Forecast 1 year into the future
horizon = 365
horizon = str(horizon) + ' days'

df_cv = cross_validation(m, initial=initial, period=period, horizon=horizon)
df_cv.head()
</code></pre><p>Out[65]:</p>
<p>dsyhatyhat_loweryhat_upperycutoff</p>
<pre><code>           ds           yhat     yhat_lower     yhat_upper         y      cutoff
0  1977-01-01  108479.087306  107041.710385  109884.311419  102445.0  1976-12-11
1  1977-02-01  102996.111502  101502.260980  104430.450535  102416.0  1976-12-11
2  1977-03-01  118973.317944  117486.531273  120346.494171  119960.0  1976-12-11
3  1977-04-01  120612.923539  119090.079896  122015.351195  121513.0  1976-12-11
4  1977-05-01  127883.031663  126371.269290  129257.086719  128884.0  1976-12-11
</code></pre>
<p>In [66]:</p>
<pre><code>df_cv.tail()
</code></pre><p>Out[66]:</p>
<pre><code>             ds           yhat     yhat_lower     yhat_upper         y      cutoff
103  2017-08-01  273614.230765  268044.449825  279627.753972  283184.0  2016-12-01
104  2017-09-01  255737.189562  249987.360798  261551.567277  262673.0  2016-12-01
105  2017-10-01  262845.616157  257365.064845  268981.082903  278937.0  2016-12-01
106  2017-11-01  249500.895087  244208.004549  255508.082651  257712.0  2016-12-01
107  2017-12-01  250750.668713  244667.910999  256110.605457  266535.0  2016-12-01
</code></pre>
<p>In [67]:</p>
<pre><code>performance_metrics(df_cv)
</code></pre><p>Out[67]:</p>
<pre><code>     horizon           mse         rmse          mae      mape     mdape  coverage
0    52 days  2.402227e+07  4901.251892  4506.384371  0.027631  0.023593       0.4
1    53 days  2.150811e+07  4637.683407  4238.662732  0.024863  0.023593       0.4
2    54 days  1.807689e+07  4251.692535  3708.943275  0.019933  0.022278       0.5
3    55 days  2.298205e+07  4793.960154  4236.275244  0.023042  0.023593       0.4
4    57 days  2.078937e+07  4559.535784  3972.087270  0.021317  0.022278       0.5
..       ...           ...          ...          ...       ...       ...       ...
94  360 days  1.814608e+07  4259.821515  3750.359483  0.019596  0.019565       0.5
95  361 days  1.726110e+07  4154.647536  3473.037339  0.018212  0.018957       0.5
96  362 days  3.173990e+07  5633.817508  4404.300729  0.022034  0.024793       0.4
97  364 days  2.986513e+07  5464.900040  4229.869860  0.021378  0.021629       0.5
98  365 days  5.443147e+07  7377.768377  5621.707803  0.026524  0.024793       0.4

[99 rows x 7 columns]
</code></pre>
<p>In [68]:</p>
<pre><code>plot_cross_validation_metric(df_cv, metric='rmse');
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/33406342/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nbGzvutVCsrIxnly2nIyvg.DMkdh133Ww15ufatpKLuS_kFO6KdrUgMUxq9TRI-hMVtsPKFL5LlXPFLiYn5DEw3r7JSd-cbKfEWe66Sqj-2vTxupwZRtKWaVW6sofj2UPNNptW5lNO67_AkkrcCUbwx8YCq_3Gv-X_Y-KoUG08rxc2_wxYYs2mWihTrS9Yd7C8XHYeAbUWDK7xEKrc20--u8yBExSNCiA2vXegVhfeRPHfBfRaLgGn558gdLW5U4KeWM00mM_ukEth0Glwi3EmTDDiJ42a0yG6-aBeDJOBdX4pfGSCBDtpqlCGO5uYW2KG-l3WvhJw_p9olp7Awrnto7cld8re2_FFKLT7VzHmDvSewvkYoyrWX5f_f0KPi3ZtcLP6z-0QsqrubeWYFYDQsplMKi2PUyRR57nFVSuUfh9X01cBYbCE7FBFY8a7Sob4EErKr7rDg3aktdNNN7I_ekCHCQF7eN3OkqqGqve1_DfDsOGUOnT_hwvps9Bo7KLiNwrJy4ud6UbLyW-Icohmt1BHl2WRkJAfkfwNLGhPeVmQ0mN7BPPfVP6hTiBlnbaf7dgiwbbMy9o2050-ceADTJWmxKjkEbsi-7SB_zbpdc5XfXD9Bi0oH6Grb71cFxGOYzH41HM20phixjAERq6iNQCZ4PTclIeySSJscMANGCMHS2g5Z-sPz5YOxTN3sXC4.axN8ODMyWnNUZwNbkoANdQ/__results___files/__results___92_0.png" alt /></p>
<p>In [69]:</p>
<pre><code>plot_cross_validation_metric(df_cv, metric='mape');
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/33406342/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nbGzvutVCsrIxnly2nIyvg.DMkdh133Ww15ufatpKLuS_kFO6KdrUgMUxq9TRI-hMVtsPKFL5LlXPFLiYn5DEw3r7JSd-cbKfEWe66Sqj-2vTxupwZRtKWaVW6sofj2UPNNptW5lNO67_AkkrcCUbwx8YCq_3Gv-X_Y-KoUG08rxc2_wxYYs2mWihTrS9Yd7C8XHYeAbUWDK7xEKrc20--u8yBExSNCiA2vXegVhfeRPHfBfRaLgGn558gdLW5U4KeWM00mM_ukEth0Glwi3EmTDDiJ42a0yG6-aBeDJOBdX4pfGSCBDtpqlCGO5uYW2KG-l3WvhJw_p9olp7Awrnto7cld8re2_FFKLT7VzHmDvSewvkYoyrWX5f_f0KPi3ZtcLP6z-0QsqrubeWYFYDQsplMKi2PUyRR57nFVSuUfh9X01cBYbCE7FBFY8a7Sob4EErKr7rDg3aktdNNN7I_ekCHCQF7eN3OkqqGqve1_DfDsOGUOnT_hwvps9Bo7KLiNwrJy4ud6UbLyW-Icohmt1BHl2WRkJAfkfwNLGhPeVmQ0mN7BPPfVP6hTiBlnbaf7dgiwbbMy9o2050-ceADTJWmxKjkEbsi-7SB_zbpdc5XfXD9Bi0oH6Grb71cFxGOYzH41HM20phixjAERq6iNQCZ4PTclIeySSJscMANGCMHS2g5Z-sPz5YOxTN3sXC4.axN8ODMyWnNUZwNbkoANdQ/__results___files/__results___93_0.png" alt /></p>
]]></content:encoded></item><item><title><![CDATA[Deep Learning-based Semantic Segmentation | Keras]]></title><description><![CDATA[About This Kernel

What is the purpose of the study?

I am working on Deep Learning and Computer Vision in Flying Automobile Project. The project I am working on are Semantic segmentation (Aerial images) during the flight of the vehicle to find suita...]]></description><link>https://www.bulentsiyah.com/deep-learning-based-semantic-segmentation-keras</link><guid isPermaLink="true">https://www.bulentsiyah.com/deep-learning-based-semantic-segmentation-keras</guid><category><![CDATA[Deep Learning]]></category><category><![CDATA[Computer Vision]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Mon, 30 Mar 2020 12:42:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1611492300021/4ZXt-uDaD.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="about-this-kernelhttpswwwkagglecombulentsiyahdeep-learning-based-semantic-segmentation-kerasabout-this-kernel"><strong>About This Kernel</strong><a target="_blank" href="https://www.kaggle.com/bulentsiyah/deep-learning-based-semantic-segmentation-keras#About-This-Kernel"></a></h1>
<ul>
<li>What is the purpose of the study?</li>
</ul>
<p>I am working on deep learning and computer vision in the Flying Automobile Project. The project involves semantic segmentation of aerial images during the vehicle's flight, both to find suitable areas where the vehicle can land and to perform volumetric control of the vehicle over these areas.</p>
<p>With this kernel, I have completed my work on <strong>semantic segmentation</strong>.</p>
<h1 id="content"><strong>Content</strong><a target="_blank" href="https://www.kaggle.com/bulentsiyah/deep-learning-based-semantic-segmentation-keras#Content"></a></h1>
<ol>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/deep-learning-based-semantic-segmentation-keras#1.">What is semantic segmentation</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/deep-learning-based-semantic-segmentation-keras#2.">Implementation of Segnet, FCN, UNet , PSPNet and other models in Keras</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/deep-learning-based-semantic-segmentation-keras#3.">Code extracted from GitHub</a></li>
</ol>
<h1 id="1-what-is-semantic-segmentation"><strong>1. What is semantic segmentation</strong><a target="_blank" href="https://www.kaggle.com/bulentsiyah/deep-learning-based-semantic-segmentation-keras#1.What-is-semantic-segmentation"></a></h1>
<p>Source: <a target="_blank" href="https://divamgupta.com/image-segmentation/2019/06/06/deep-learning-semantic-segmentation-keras.html">https://divamgupta.com/image-segmentation/2019/06/06/deep-learning-semantic-segmentation-keras.html</a></p>
<p>Semantic image segmentation is the task of classifying each pixel in an image from a predefined set of classes. In the following example, different entities are classified.</p>
<p><img src="https://divamgupta.com/assets/images/posts/imgseg/image15.png?style=centerme" alt="Semantic segmentation of a bedroom image" /></p>
<p>In the above example, the pixels belonging to the bed are classified in the class "bed", the pixels corresponding to the walls are labeled as "wall", etc.</p>
<p>In particular, our goal is to take an image of size W x H x 3 and generate a W x H matrix containing the predicted class IDs corresponding to all the pixels.</p>
<p><img src="https://divamgupta.com/assets/images/posts/imgseg/image14.png?style=centerme" alt="Image source: jeremyjordan.me" /></p>
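<p>As a minimal sketch (with made-up values, not the author's code), this mapping from per-pixel class scores to a W x H matrix of class IDs is just an argmax over the class axis:</p>
<pre><code>import numpy as np

# Hypothetical per-pixel class scores from a segmentation network:
# height x width x n_classes (here 4 x 6 pixels, 3 classes).
scores = np.random.rand(4, 6, 3)

# The predicted segmentation takes the highest-scoring class per pixel.
pred = scores.argmax(axis=-1)

print(pred.shape)  # (4, 6)
</code></pre>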
<p>Usually, in an image with various entities, we want to know which pixel belongs to which entity. For example, in an outdoor image, we can segment the sky, ground, trees, people, etc.</p>
<p>Semantic segmentation is different from object detection as it does not predict any bounding boxes around the objects. We do not distinguish between different instances of the same object. For example, there could be multiple cars in the scene and all of them would have the same label.</p>
<p><img src="https://divamgupta.com/assets/images/posts/imgseg/image7.png?style=centerme" alt="An example where there are multiple instances of the same object class" /></p>
<p>In order to perform semantic segmentation, a higher level understanding of the image is required. The algorithm should figure out the objects present and also the pixels which correspond to the object. Semantic segmentation is one of the essential tasks for complete scene understanding.</p>
<h2 id="datasethttpswwwkagglecombulentsiyahdeep-learning-based-semantic-segmentation-kerasdataset">Dataset<a target="_blank" href="https://www.kaggle.com/bulentsiyah/deep-learning-based-semantic-segmentation-keras#Dataset"></a></h2>
<p>The first step in training our segmentation model is to prepare the dataset. We would need the input RGB images and the corresponding segmentation images. If you want to make your own dataset, a tool like labelme or GIMP can be used to manually generate the ground truth segmentation masks.</p>
<p>Assign each class a unique ID. In the segmentation images, the pixel value should denote the class ID of the corresponding pixel. This is a common format used by most of the datasets and keras_segmentation. For the segmentation maps, do not use the jpg format as jpg is lossy and the pixel values might change. Use bmp or png format instead. And of course, the size of the input image and the segmentation image should be the same.</p>
<p>In the following example, pixel (0,0) is labeled as class 2, pixel (3,4) is labeled as class 1 and rest of the pixels are labeled as class 0.</p>
<p>In [1]:</p>
<pre><code>import cv2
import numpy as np

ann_img = np.zeros((30, 30, 3)).astype('uint8')
ann_img[3, 4] = 1  # this sets the label of pixel (3,4) to 1
ann_img[0, 0] = 2  # this sets the label of pixel (0,0) to 2
</code></pre><p>After generating the segmentation images, place them in the training/testing folder. Make separate folders for the input images and the segmentation images; the file name of an input image and its corresponding segmentation image should be the same. For this tutorial we will use a dataset which is already prepared. You can download it here (<a target="_blank" href="https://www.kaggle.com/bulentsiyah/semantic-drone-dataset">Aerial Semantic Segmentation Drone Dataset</a>).</p>
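<p>As a minimal illustration of the point about lossy formats (the file name here is arbitrary), writing the mask above as PNG with cv2 preserves the class IDs exactly on a write/read round trip, which a JPG would not guarantee:</p>
<pre><code>import cv2
import numpy as np

ann_img = np.zeros((30, 30, 3)).astype('uint8')
ann_img[3, 4] = 1  # class 1
ann_img[0, 0] = 2  # class 2

cv2.imwrite("ann_001.png", ann_img)     # PNG is lossless
restored = cv2.imread("ann_001.png", 1)

assert (restored == ann_img).all()      # class IDs survive intact
</code></pre>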
<h2 id="aerial-semantic-segmentation-drone-datasethttpswwwkagglecombulentsiyahsemantic-drone-datasethttpswwwkagglecombulentsiyahdeep-learning-based-semantic-segmentation-kerasaerial-semantic-segmentation-drone-dataset"><a target="_blank" href="https://www.kaggle.com/bulentsiyah/semantic-drone-dataset">Aerial Semantic Segmentation Drone Dataset</a><a target="_blank" href="https://www.kaggle.com/bulentsiyah/deep-learning-based-semantic-segmentation-keras#Aerial-Semantic-Segmentation-Drone-Dataset"></a></h2>
<p>In [2]:</p>
<pre><code>from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

original_image = "/kaggle/input/semantic-drone-dataset/dataset/semantic_drone_dataset/original_images/001.jpg"
label_image_semantic = "/kaggle/input/semantic-drone-dataset/dataset/semantic_drone_dataset/label_images_semantic/001.png"

fig, axs = plt.subplots(1, 2, figsize=(16, 8), constrained_layout=True)
axs[0].imshow(Image.open(original_image))
axs[0].grid(False)
label_image_semantic = np.asarray(Image.open(label_image_semantic))
axs[1].imshow(label_image_semantic)
axs[1].grid(False)
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/51582896/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..bqi5Tjvdtm0QdQ70hMO4CQ.haFgy_CazNh-8gPIU_0qWWssdWYZzajukHSya7EYGU045j8VmYSAuAaB2n6cTLUZKKtzR4m0rVENKUXjL2UkY5oBR1zr11xSbmzks5_q5gUXUNYi2IRK0Ju-QQqMFTn8Fd-kQe_YVDYsfxW80NNCtP3G1G0YfTjuVR9Js2HKN-gxZGURgJ5lUbEAq_oiOGWDJPHSSTZcWyM9yrvadG9gLcEMQH4a_MemUqR7NE3kFkseFs8VTxDuIYC-NnR1v5lB2qVOWipTFAq0G46lxii8LLMhorBSy-Uw9XfnPTS9OgluB6X7X2x6ok2kHu8qMUpE7xf0cwpXjcur6iWx_7tBHW11hD4TuXSGrO2ABe89EKwgAeQg0EU47q9ih45CAfhuF1cpiU_m7sPLJZ6he6QqeVsDhUEJ25_pjqDn-ha8_U3dFJMqCz9kI8Wp7r3ikpdxygRuDxpzkbVtM3_Twb4Gr_beRUo4EYBkXmyIydwAFJmjlvGRHAOrm3n45bAKjTrbSiDaPG7WM1CqxKp7_D0tnJK7ZzXzacaXGsSXaHfrD5Slb9qdAHVw-6GYEJeJA9ZGwFzSpehFcsKr90VWeIGkhG2qySYYp2jewlbIwFGQ-acJP1MsMsCQZwzS5M9DKklhVVTMhCqSoQlfw9mO-r0P0aV39qoz5HWLA3jVmJfaZM_Oe_aaUTnEqC2Wyth3yLBM.MPjTO42Y6h4ppCKEmUg0JA/__results___files/__results___5_0.png" alt /></p>
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/deep-learning-based-semantic-segmentation-keras#2.Implementation-of-Segnet,-FCN,-UNet-,-PSPNet-and-other-models-in-Keras">2. Implementation of Segnet, FCN, UNet, PSPNet and other models in Keras</a></p>
<p>Source Github Link: <a target="_blank" href="https://github.com/divamgupta/image-segmentation-keras">https://github.com/divamgupta/image-segmentation-keras</a></p>
<p>In [3]:</p>
<pre><code>!pip install keras-segmentation
</code></pre><h3 id="trainhttpswwwkagglecombulentsiyahdeep-learning-based-semantic-segmentation-kerastrain">Train<a target="_blank" href="https://www.kaggle.com/bulentsiyah/deep-learning-based-semantic-segmentation-keras#Train"></a></h3>
<p>In [4]:</p>
<pre><code>kaggle_commit = True
epochs = 20
if kaggle_commit:
    epochs = 5
</code></pre><p>In [5]:</p>
<pre><code>from keras_segmentation.models.unet import vgg_unet

# Aerial Semantic Segmentation Drone Dataset classes: tree, gras, other
# vegetation, dirt, gravel, rocks, water, paved area, pool, person, dog,
# car, bicycle, roof, wall, fence, fence-pole, window, door, obstacle
n_classes = 23

model = vgg_unet(n_classes=n_classes, input_height=416, input_width=608)

model.train(
    train_images="/kaggle/input/semantic-drone-dataset/dataset/semantic_drone_dataset/original_images/",
    train_annotations="/kaggle/input/semantic-drone-dataset/dataset/semantic_drone_dataset/label_images_semantic/",
    checkpoints_path="vgg_unet",
    epochs=epochs
)
</code></pre><p>Output</p>
<pre><code>

Using TensorFlow backend.

Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5
58892288/58889256 [==============================] - 5s 0us/step

Verifying training dataset
100%|██████████| 400/400 [05:23&lt;00:00, 1.24it/s]
Dataset verified!

Epoch 1/5
512/512 [==============================] - 693s 1s/step - loss: 1.4858 - accuracy: 0.5910
saved vgg_unet.0
Epoch 2/5
512/512 [==============================] - 690s 1s/step - loss: 1.1745 - accuracy: 0.6474
saved vgg_unet.1
Epoch 3/5
512/512 [==============================] - 689s 1s/step - loss: 1.0604 - accuracy: 0.6776
saved vgg_unet.2
Epoch 4/5
512/512 [==============================] - 692s 1s/step - loss: 0.9800 - accuracy: 0.7042
saved vgg_unet.3
Epoch 5/5
512/512 [==============================] - 692s 1s/step - loss: 0.9144 - accuracy: 0.7254
saved vgg_unet.4
</code></pre><h3 id="predictionhttpswwwkagglecombulentsiyahdeep-learning-based-semantic-segmentation-kerasprediction">Prediction<a target="_blank" href="https://www.kaggle.com/bulentsiyah/deep-learning-based-semantic-segmentation-keras#Prediction"></a></h3>
<p>In [6]:</p>
<pre><code>import time
from PIL import Image
import matplotlib.pyplot as plt
%matplotlib inline

start = time.time()

input_image = "/kaggle/input/semantic-drone-dataset/dataset/semantic_drone_dataset/original_images/001.jpg"
out = model.predict_segmentation(inp=input_image, out_fname="out.png")

fig, axs = plt.subplots(1, 3, figsize=(20, 20), constrained_layout=True)
img_orig = Image.open(input_image)
axs[0].imshow(img_orig)
axs[0].set_title('original image-001.jpg')
axs[0].grid(False)
axs[1].imshow(out)
axs[1].set_title('prediction image-out.png')
axs[1].grid(False)
validation_image = "/kaggle/input/semantic-drone-dataset/dataset/semantic_drone_dataset/label_images_semantic/001.png"
axs[2].imshow(Image.open(validation_image))
axs[2].set_title('true label image-001.png')
axs[2].grid(False)

done = time.time()
elapsed = done - start
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/51582896/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..bqi5Tjvdtm0QdQ70hMO4CQ.haFgy_CazNh-8gPIU_0qWWssdWYZzajukHSya7EYGU045j8VmYSAuAaB2n6cTLUZKKtzR4m0rVENKUXjL2UkY5oBR1zr11xSbmzks5_q5gUXUNYi2IRK0Ju-QQqMFTn8Fd-kQe_YVDYsfxW80NNCtP3G1G0YfTjuVR9Js2HKN-gxZGURgJ5lUbEAq_oiOGWDJPHSSTZcWyM9yrvadG9gLcEMQH4a_MemUqR7NE3kFkseFs8VTxDuIYC-NnR1v5lB2qVOWipTFAq0G46lxii8LLMhorBSy-Uw9XfnPTS9OgluB6X7X2x6ok2kHu8qMUpE7xf0cwpXjcur6iWx_7tBHW11hD4TuXSGrO2ABe89EKwgAeQg0EU47q9ih45CAfhuF1cpiU_m7sPLJZ6he6QqeVsDhUEJ25_pjqDn-ha8_U3dFJMqCz9kI8Wp7r3ikpdxygRuDxpzkbVtM3_Twb4Gr_beRUo4EYBkXmyIydwAFJmjlvGRHAOrm3n45bAKjTrbSiDaPG7WM1CqxKp7_D0tnJK7ZzXzacaXGsSXaHfrD5Slb9qdAHVw-6GYEJeJA9ZGwFzSpehFcsKr90VWeIGkhG2qySYYp2jewlbIwFGQ-acJP1MsMsCQZwzS5M9DKklhVVTMhCqSoQlfw9mO-r0P0aV39qoz5HWLA3jVmJfaZM_Oe_aaUTnEqC2Wyth3yLBM.MPjTO42Y6h4ppCKEmUg0JA/__results___files/__results___12_0.png" alt /></p>
<p>In [7]:</p>
<pre><code>print(elapsed)
print(out)
print(out.shape)

3.0578877925872803
[[0 1 0 ... 0 3 0]
 [0 1 1 ... 3 3 3]
 [0 1 1 ... 3 3 3]
 ...
 [0 1 1 ... 3 3 3]
 [0 1 1 ... 3 3 3]
 [0 1 1 ... 3 3 3]]
(208, 304)
</code></pre><p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/deep-learning-based-semantic-segmentation-keras#3.-I-extracted-Github-codes">3. Extracting the GitHub code</a></p>
<p>In <a target="_blank" href="https://www.kaggle.com/bulentsiyah/deep-learning-based-semantic-segmentation-keras#2.">Implementation of Segnet, FCN, UNet, PSPNet and other models in Keras</a>, the library code does everything for you, leaving no opportunity to modify its internals. I extracted this code from GitHub and rewrote it here explicitly, so that we can work on the model as we wish.</p>
<p>In [8]:</p>
<pre><code>import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

import keras
from keras.models import *
from keras.layers import *
from types import MethodType
import random
import six
import json
from tqdm import tqdm
import cv2
import numpy as np
import itertools
</code></pre><p>In [9]:</p>
<pre><code>import sys
print(sys.version)

3.6.6 |Anaconda, Inc.| (default, Oct  9 2018, 12:34:16)
[GCC 7.3.0]
</code></pre><p>In [10]:</p>
<pre><code>IMAGE_ORDERING_CHANNELS_FIRST = "channels_first"
IMAGE_ORDERING_CHANNELS_LAST = "channels_last"

# Default IMAGE_ORDERING = channels_last
IMAGE_ORDERING = IMAGE_ORDERING_CHANNELS_LAST

if IMAGE_ORDERING == 'channels_first':
    MERGE_AXIS = 1
elif IMAGE_ORDERING == 'channels_last':
    MERGE_AXIS = -1

if IMAGE_ORDERING == 'channels_first':
    pretrained_url = "https://github.com/fchollet/deep-learning-models/" \
                     "releases/download/v0.1/" \
                     "vgg16_weights_th_dim_ordering_th_kernels_notop.h5"
elif IMAGE_ORDERING == 'channels_last':
    pretrained_url = "https://github.com/fchollet/deep-learning-models/" \
                     "releases/download/v0.1/" \
                     "vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5"

class_colors = [(random.randint(0, 255), random.randint(0, 255),
                 random.randint(0, 255)) for _ in range(5000)]
</code></pre><p>In [11]:</p>
<pre><code>def get_colored_segmentation_image(seg_arr, n_classes, colors=class_colors):
    output_height = seg_arr.shape[0]
    output_width = seg_arr.shape[1]

    seg_img = np.zeros((output_height, output_width, 3))

    for c in range(n_classes):
        seg_img[:, :, 0] += ((seg_arr[:, :] == c) * (colors[c][0])).astype('uint8')
        seg_img[:, :, 1] += ((seg_arr[:, :] == c) * (colors[c][1])).astype('uint8')
        seg_img[:, :, 2] += ((seg_arr[:, :] == c) * (colors[c][2])).astype('uint8')

    return seg_img
</code></pre><p>In [12]:</p>
<pre><code>def visualize_segmentation(seg_arr, inp_img=None, n_classes=None,
                           colors=class_colors, class_names=None,
                           overlay_img=False, show_legends=False,
                           prediction_width=None, prediction_height=None):
    if n_classes is None:
        n_classes = np.max(seg_arr)

    seg_img = get_colored_segmentation_image(seg_arr, n_classes, colors=colors)

    if inp_img is not None:
        orininal_h = inp_img.shape[0]
        orininal_w = inp_img.shape[1]
        seg_img = cv2.resize(seg_img, (orininal_w, orininal_h))

    if (prediction_height is not None) and (prediction_width is not None):
        seg_img = cv2.resize(seg_img, (prediction_width, prediction_height))
        if inp_img is not None:
            inp_img = cv2.resize(inp_img, (prediction_width, prediction_height))

    if overlay_img:
        assert inp_img is not None
        seg_img = overlay_seg_image(inp_img, seg_img)

    if show_legends:
        assert class_names is not None
        legend_img = get_legends(class_names, colors=colors)
        seg_img = concat_lenends(seg_img, legend_img)

    return seg_img
</code></pre><p>In [13]:</p>
<pre><code>def get_image_array(image_input, width, height,
                    imgNorm="sub_mean", ordering='channels_first'):
    """ Load image array from input """
    if type(image_input) is np.ndarray:
        # It is already an array, use it as it is
        img = image_input
    elif isinstance(image_input, six.string_types):
        if not os.path.isfile(image_input):
            raise DataLoaderError("get_image_array: path {0} doesn't exist".format(image_input))
        img = cv2.imread(image_input, 1)
    else:
        raise DataLoaderError("get_image_array: Can't process input type {0}".format(str(type(image_input))))

    if imgNorm == "sub_and_divide":
        img = np.float32(cv2.resize(img, (width, height))) / 127.5 - 1
    elif imgNorm == "sub_mean":
        img = cv2.resize(img, (width, height))
        img = img.astype(np.float32)
        img[:, :, 0] -= 103.939
        img[:, :, 1] -= 116.779
        img[:, :, 2] -= 123.68
        img = img[:, :, ::-1]
    elif imgNorm == "divide":
        img = cv2.resize(img, (width, height))
        img = img.astype(np.float32)
        img = img / 255.0

    if ordering == 'channels_first':
        img = np.rollaxis(img, 2, 0)
    return img


def get_segmentation_array(image_input, nClasses, width, height, no_reshape=False):
    """ Load segmentation array from input """
    seg_labels = np.zeros((height, width, nClasses))

    if type(image_input) is np.ndarray:
        # It is already an array, use it as it is
        img = image_input
    elif isinstance(image_input, six.string_types):
        if not os.path.isfile(image_input):
            raise DataLoaderError("get_segmentation_array: path {0} doesn't exist".format(image_input))
        img = cv2.imread(image_input, 1)
    else:
        raise DataLoaderError("get_segmentation_array: Can't process input type {0}".format(str(type(image_input))))

    img = cv2.resize(img, (width, height), interpolation=cv2.INTER_NEAREST)
    img = img[:, :, 0]

    for c in range(nClasses):
        seg_labels[:, :, c] = (img == c).astype(int)

    if not no_reshape:
        seg_labels = np.reshape(seg_labels, (width * height, nClasses))

    return seg_labels
</code></pre><p>In [14]:</p>
<pre><code>def image_segmentation_generator(images_path, segs_path, batch_size,
                                 n_classes, input_height, input_width,
                                 output_height, output_width,
                                 do_augment=False, augmentation_name="aug_all"):
    img_seg_pairs = get_pairs_from_paths(images_path, segs_path)
    random.shuffle(img_seg_pairs)
    zipped = itertools.cycle(img_seg_pairs)

    while True:
        X = []
        Y = []
        for _ in range(batch_size):
            im, seg = next(zipped)
            im = cv2.imread(im, 1)
            seg = cv2.imread(seg, 1)

            if do_augment:
                im, seg[:, :, 0] = augment_seg(im, seg[:, :, 0],
                                               augmentation_name=augmentation_name)

            X.append(get_image_array(im, input_width, input_height,
                                     ordering=IMAGE_ORDERING))
            Y.append(get_segmentation_array(seg, n_classes,
                                            output_width, output_height))

        yield np.array(X), np.array(Y)
</code></pre><p>In [15]:</p>
<pre><code>def get_pairs_from_paths(images_path, segs_path, ignore_non_matching=False):
    """ Find all the images from the images_path directory and
        the segmentation images from the segs_path directory
        while checking integrity of data """

    ACCEPTABLE_IMAGE_FORMATS = [".jpg", ".jpeg", ".png", ".bmp"]
    ACCEPTABLE_SEGMENTATION_FORMATS = [".png", ".bmp"]

    image_files = []
    segmentation_files = {}

    for dir_entry in os.listdir(images_path):
        if os.path.isfile(os.path.join(images_path, dir_entry)) and \
                os.path.splitext(dir_entry)[1] in ACCEPTABLE_IMAGE_FORMATS:
            file_name, file_extension = os.path.splitext(dir_entry)
            image_files.append((file_name, file_extension,
                                os.path.join(images_path, dir_entry)))

    for dir_entry in os.listdir(segs_path):
        if os.path.isfile(os.path.join(segs_path, dir_entry)) and \
                os.path.splitext(dir_entry)[1] in ACCEPTABLE_SEGMENTATION_FORMATS:
            file_name, file_extension = os.path.splitext(dir_entry)
            if file_name in segmentation_files:
                raise DataLoaderError("Segmentation file with filename {0} already exists and is ambiguous to resolve with path {1}. Please remove or rename the latter.".format(file_name, os.path.join(segs_path, dir_entry)))
            segmentation_files[file_name] = (file_extension,
                                             os.path.join(segs_path, dir_entry))

    return_value = []
    # Match the images and segmentations
    for image_file, _, image_full_path in image_files:
        if image_file in segmentation_files:
            return_value.append((image_full_path,
                                 segmentation_files[image_file][1]))
        elif ignore_non_matching:
            continue
        else:
            # Error out
            raise DataLoaderError("No corresponding segmentation found for image {0}.".format(image_full_path))

    return return_value
</code></pre><p>In [16]:</p>
<pre><code>def verify_segmentation_dataset(images_path, segs_path,
                                n_classes, show_all_errors=False):
    try:
        img_seg_pairs = get_pairs_from_paths(images_path, segs_path)
        if not len(img_seg_pairs):
            print("Couldn't load any data from images_path: {0} and segmentations path: {1}".format(images_path, segs_path))
            return False

        return_value = True
        for im_fn, seg_fn in tqdm(img_seg_pairs):
            img = cv2.imread(im_fn)
            seg = cv2.imread(seg_fn)
            # Check dimensions match
            if not img.shape == seg.shape:
                return_value = False
                print("The size of image {0} and its segmentation {1} doesn't match (possibly the files are corrupt).".format(im_fn, seg_fn))
                if not show_all_errors:
                    break
            else:
                max_pixel_value = np.max(seg[:, :, 0])
                if max_pixel_value &gt;= n_classes:
                    return_value = False
                    print("The pixel values of the segmentation image {0} violating range [0, {1}]. Found maximum pixel value {2}".format(seg_fn, str(n_classes - 1), max_pixel_value))
                    if not show_all_errors:
                        break

        if return_value:
            print("Dataset verified!")
        else:
            print("Dataset not verified!")
        return return_value
    except Exception as e:
        print("Found error during data loading\n{0}".format(str(e)))
        return False
</code></pre><p>In [17]:</p>
<pre><code>def evaluate(model=None, inp_images=None, annotations=None,
             inp_images_dir=None, annotations_dir=None,
             checkpoints_path=None):
    if model is None:
        assert (checkpoints_path is not None), "Please provide the model or the checkpoints_path"
        model = model_from_checkpoint_path(checkpoints_path)

    if inp_images is None:
        assert (inp_images_dir is not None), "Please provide inp_images or inp_images_dir"
        assert (annotations_dir is not None), "Please provide annotations or annotations_dir"

        paths = get_pairs_from_paths(inp_images_dir, annotations_dir)
        paths = list(zip(*paths))
        inp_images = list(paths[0])
        annotations = list(paths[1])

    assert type(inp_images) is list
    assert type(annotations) is list

    tp = np.zeros(model.n_classes)
    fp = np.zeros(model.n_classes)
    fn = np.zeros(model.n_classes)
    n_pixels = np.zeros(model.n_classes)

    for inp, ann in tqdm(zip(inp_images, annotations)):
        pr = predict(model, inp)
        gt = get_segmentation_array(ann, model.n_classes,
                                    model.output_width, model.output_height,
                                    no_reshape=True)
        gt = gt.argmax(-1)
        pr = pr.flatten()
        gt = gt.flatten()

        for cl_i in range(model.n_classes):
            tp[cl_i] += np.sum((pr == cl_i) * (gt == cl_i))
            fp[cl_i] += np.sum((pr == cl_i) * (gt != cl_i))
            fn[cl_i] += np.sum((pr != cl_i) * (gt == cl_i))
            n_pixels[cl_i] += np.sum(gt == cl_i)

    cl_wise_score = tp / (tp + fp + fn + 0.000000000001)
    n_pixels_norm = n_pixels / np.sum(n_pixels)
    frequency_weighted_IU = np.sum(cl_wise_score * n_pixels_norm)
    mean_IU = np.mean(cl_wise_score)
    return {"frequency_weighted_IU": frequency_weighted_IU,
            "mean_IU": mean_IU,
            "class_wise_IU": cl_wise_score}
</code></pre><p>In [18]:</p>
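<p>The per-class loop in <code>evaluate</code> above computes IoU as TP / (TP + FP + FN). A minimal sketch of that arithmetic on toy flattened label maps, using the same variable names as the function:</p>

```python
import numpy as np

# Toy predicted / ground-truth label maps, already flattened
pr = np.array([0, 0, 1, 1, 2, 2])
gt = np.array([0, 1, 1, 1, 2, 0])
n_classes = 3

tp = np.zeros(n_classes)
fp = np.zeros(n_classes)
fn = np.zeros(n_classes)
n_pixels = np.zeros(n_classes)
for cl_i in range(n_classes):
    tp[cl_i] += np.sum((pr == cl_i) * (gt == cl_i))
    fp[cl_i] += np.sum((pr == cl_i) * (gt != cl_i))
    fn[cl_i] += np.sum((pr != cl_i) * (gt == cl_i))
    n_pixels[cl_i] += np.sum(gt == cl_i)

# IoU per class = TP / (TP + FP + FN); epsilon avoids division by zero
cl_wise_score = tp / (tp + fp + fn + 1e-12)
mean_IU = np.mean(cl_wise_score)
print(cl_wise_score, mean_IU)
```

<p>Here class 0 gets IoU 1/3, class 1 gets 2/3, class 2 gets 1/2, so <code>mean_IU</code> is 0.5. The frequency-weighted variant in <code>evaluate</code> additionally weights each class score by its share of ground-truth pixels.</p>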
<pre><code>def predict_multiple(model=None, inps=None, inp_dir=None, out_dir=None,
                     checkpoints_path=None, overlay_img=False,
                     class_names=None, show_legends=False, colors=class_colors,
                     prediction_width=None, prediction_height=None):
    if model is None and (checkpoints_path is not None):
        model = model_from_checkpoint_path(checkpoints_path)

    if inps is None and (inp_dir is not None):
        inps = glob.glob(os.path.join(inp_dir, "*.jpg")) + \
            glob.glob(os.path.join(inp_dir, "*.png")) + \
            glob.glob(os.path.join(inp_dir, "*.jpeg"))

    assert type(inps) is list

    all_prs = []
    for i, inp in enumerate(tqdm(inps)):
        if out_dir is None:
            out_fname = None
        else:
            if isinstance(inp, six.string_types):
                out_fname = os.path.join(out_dir, os.path.basename(inp))
            else:
                out_fname = os.path.join(out_dir, str(i) + ".jpg")

        pr = predict(model, inp, out_fname,
                     overlay_img=overlay_img, class_names=class_names,
                     show_legends=show_legends, colors=colors,
                     prediction_width=prediction_width,
                     prediction_height=prediction_height)
        all_prs.append(pr)

    return all_prs
</code></pre><p>In [19]:</p>
<pre><code>def predict(model=None, inp=None, out_fname=None, checkpoints_path=None,
            overlay_img=False, class_names=None, show_legends=False,
            colors=class_colors, prediction_width=None,
            prediction_height=None):
    if model is None and (checkpoints_path is not None):
        model = model_from_checkpoint_path(checkpoints_path)

    assert (inp is not None)
    assert ((type(inp) is np.ndarray) or isinstance(inp, six.string_types)), \
        "Input should be the CV image or the input file name"

    if isinstance(inp, six.string_types):
        inp = cv2.imread(inp)

    assert len(inp.shape) == 3, "Image should be h,w,3 "

    original_h = inp.shape[0]
    original_w = inp.shape[1]

    output_width = model.output_width
    output_height = model.output_height
    input_width = model.input_width
    input_height = model.input_height
    n_classes = model.n_classes

    x = get_image_array(inp, input_width, input_height,
                        ordering=IMAGE_ORDERING)
    pr = model.predict(np.array([x]))[0]
    pr = pr.reshape((output_height, output_width, n_classes)).argmax(axis=2)

    seg_img = visualize_segmentation(pr, inp, n_classes=n_classes,
                                     colors=colors, overlay_img=overlay_img,
                                     show_legends=show_legends,
                                     class_names=class_names,
                                     prediction_width=prediction_width,
                                     prediction_height=prediction_height)

    if out_fname is not None:
        cv2.imwrite(out_fname, seg_img)

    return pr
</code></pre><p>In [20]:</p>
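<p>The key post-processing step in <code>predict</code> is turning the flat softmax output back into a 2-D class map. A minimal numpy sketch of that reshape-then-argmax step on a tiny 2&#215;2 grid with 3 classes:</p>

```python
import numpy as np

# The network emits one softmax score vector per pixel,
# flattened to shape (output_height * output_width, n_classes)
output_height, output_width, n_classes = 2, 2, 3
scores = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.2, 0.3, 0.5],
                   [0.9, 0.05, 0.05]])

# Same step as in predict(): restore the spatial grid,
# then take the most probable class per pixel
pr = scores.reshape((output_height, output_width, n_classes)).argmax(axis=2)
print(pr)  # [[0 1]
           #  [2 0]]
```

<p><code>visualize_segmentation</code> then maps each integer class in <code>pr</code> to a colour for display.</p>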
<pre><code>def train(model,
          train_images,
          train_annotations,
          input_height=None,
          input_width=None,
          n_classes=None,
          verify_dataset=True,
          checkpoints_path=None,
          epochs=5,
          batch_size=2,
          validate=False,
          val_images=None,
          val_annotations=None,
          val_batch_size=2,
          auto_resume_checkpoint=False,
          load_weights=None,
          steps_per_epoch=512,
          val_steps_per_epoch=512,
          gen_use_multiprocessing=False,
          ignore_zero_class=False,
          optimizer_name='adadelta',
          do_augment=False,
          augmentation_name="aug_all"):
    # check if user gives model name instead of the model object
    if isinstance(model, six.string_types):
        # create the model from the name
        assert (n_classes is not None), "Please provide the n_classes"
        if (input_height is not None) and (input_width is not None):
            model = model_from_name[model](n_classes,
                                           input_height=input_height,
                                           input_width=input_width)
        else:
            model = model_from_name[model](n_classes)

    n_classes = model.n_classes
    input_height = model.input_height
    input_width = model.input_width
    output_height = model.output_height
    output_width = model.output_width

    if validate:
        assert val_images is not None
        assert val_annotations is not None

    if optimizer_name is not None:
        if ignore_zero_class:
            loss_k = masked_categorical_crossentropy
        else:
            loss_k = 'categorical_crossentropy'
        model.compile(loss=loss_k,
                      optimizer=optimizer_name,
                      metrics=['accuracy'])

    if checkpoints_path is not None:
        with open(checkpoints_path + "_config.json", "w") as f:
            json.dump({
                "model_class": model.model_name,
                "n_classes": n_classes,
                "input_height": input_height,
                "input_width": input_width,
                "output_height": output_height,
                "output_width": output_width
            }, f)

    if load_weights is not None and len(load_weights) &gt; 0:
        print("Loading weights from ", load_weights)
        model.load_weights(load_weights)

    if auto_resume_checkpoint and (checkpoints_path is not None):
        latest_checkpoint = find_latest_checkpoint(checkpoints_path)
        if latest_checkpoint is not None:
            print("Loading the weights from latest checkpoint ",
                  latest_checkpoint)
            model.load_weights(latest_checkpoint)

    if verify_dataset:
        print("Verifying training dataset")
        verified = verify_segmentation_dataset(train_images,
                                               train_annotations,
                                               n_classes)
        assert verified
        if validate:
            print("Verifying validation dataset")
            verified = verify_segmentation_dataset(val_images,
                                                   val_annotations,
                                                   n_classes)
            assert verified

    train_gen = image_segmentation_generator(
        train_images, train_annotations, batch_size, n_classes,
        input_height, input_width, output_height, output_width,
        do_augment=do_augment, augmentation_name=augmentation_name)

    if validate:
        val_gen = image_segmentation_generator(
            val_images, val_annotations, val_batch_size, n_classes,
            input_height, input_width, output_height, output_width)

    if not validate:
        for ep in range(epochs):
            print("Starting Epoch ", ep)
            model.fit_generator(train_gen, steps_per_epoch,
                                epochs=1, use_multiprocessing=True)
            if checkpoints_path is not None:
                model.save_weights(checkpoints_path + "." + str(ep))
                print("saved ", checkpoints_path + ".model." + str(ep))
            print("Finished Epoch", ep)
    else:
        for ep in range(epochs):
            print("Starting Epoch ", ep)
            model.fit_generator(train_gen, steps_per_epoch,
                                validation_data=val_gen,
                                validation_steps=val_steps_per_epoch,
                                epochs=1,
                                use_multiprocessing=gen_use_multiprocessing)
            if checkpoints_path is not None:
                model.save_weights(checkpoints_path + "." + str(ep))
                print("saved ", checkpoints_path + ".model." + str(ep))
            print("Finished Epoch", ep)
</code></pre><p>In [21]:</p>
<pre><code>def get_segmentation_model(input, output):
    img_input = input
    o = output

    o_shape = Model(img_input, o).output_shape
    i_shape = Model(img_input, o).input_shape

    if IMAGE_ORDERING == 'channels_first':
        output_height = o_shape[2]
        output_width = o_shape[3]
        input_height = i_shape[2]
        input_width = i_shape[3]
        n_classes = o_shape[1]
        o = (Reshape((-1, output_height * output_width)))(o)
        o = (Permute((2, 1)))(o)
    elif IMAGE_ORDERING == 'channels_last':
        output_height = o_shape[1]
        output_width = o_shape[2]
        input_height = i_shape[1]
        input_width = i_shape[2]
        n_classes = o_shape[3]
        o = (Reshape((output_height * output_width, -1)))(o)

    o = (Activation('softmax'))(o)
    model = Model(img_input, o)
    model.output_width = output_width
    model.output_height = output_height
    model.n_classes = n_classes
    model.input_height = input_height
    model.input_width = input_width
    model.model_name = ""

    model.train = MethodType(train, model)
    model.predict_segmentation = MethodType(predict, model)
    model.predict_multiple = MethodType(predict_multiple, model)
    model.evaluate_segmentation = MethodType(evaluate, model)

    return model
</code></pre><p>In [22]:</p>
<pre><code>def get_vgg_encoder(input_height=224, input_width=224, pretrained='imagenet'):
    assert input_height % 32 == 0
    assert input_width % 32 == 0

    if IMAGE_ORDERING == 'channels_first':
        img_input = Input(shape=(3, input_height, input_width))
    elif IMAGE_ORDERING == 'channels_last':
        img_input = Input(shape=(input_height, input_width, 3))

    # Block 1
    x = Conv2D(64, (3, 3), activation='relu', padding='same',
               name='block1_conv1', data_format=IMAGE_ORDERING)(img_input)
    x = Conv2D(64, (3, 3), activation='relu', padding='same',
               name='block1_conv2', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool',
                     data_format=IMAGE_ORDERING)(x)
    f1 = x

    # Block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same',
               name='block2_conv1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same',
               name='block2_conv2', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool',
                     data_format=IMAGE_ORDERING)(x)
    f2 = x

    # Block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same',
               name='block3_conv1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same',
               name='block3_conv2', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same',
               name='block3_conv3', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool',
                     data_format=IMAGE_ORDERING)(x)
    f3 = x

    # Block 4
    x = Conv2D(512, (3, 3), activation='relu', padding='same',
               name='block4_conv1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same',
               name='block4_conv2', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same',
               name='block4_conv3', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool',
                     data_format=IMAGE_ORDERING)(x)
    f4 = x

    # Block 5
    x = Conv2D(512, (3, 3), activation='relu', padding='same',
               name='block5_conv1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same',
               name='block5_conv2', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same',
               name='block5_conv3', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool',
                     data_format=IMAGE_ORDERING)(x)
    f5 = x

    if pretrained == 'imagenet':
        VGG_Weights_path = keras.utils.get_file(
            pretrained_url.split("/")[-1], pretrained_url)
        Model(img_input, x).load_weights(VGG_Weights_path)

    return img_input, [f1, f2, f3, f4, f5]
</code></pre><p>In [23]:</p>
<pre><code>def _unet(n_classes, encoder, l1_skip_conn=True,
          input_height=416, input_width=608):
    img_input, levels = encoder(input_height=input_height,
                                input_width=input_width)
    [f1, f2, f3, f4, f5] = levels

    o = f4

    o = (ZeroPadding2D((1, 1), data_format=IMAGE_ORDERING))(o)
    o = (Conv2D(512, (3, 3), padding='valid', data_format=IMAGE_ORDERING))(o)
    o = (BatchNormalization())(o)

    o = (UpSampling2D((2, 2), data_format=IMAGE_ORDERING))(o)
    o = (concatenate([o, f3], axis=MERGE_AXIS))
    o = (ZeroPadding2D((1, 1), data_format=IMAGE_ORDERING))(o)
    o = (Conv2D(256, (3, 3), padding='valid', data_format=IMAGE_ORDERING))(o)
    o = (BatchNormalization())(o)

    o = (UpSampling2D((2, 2), data_format=IMAGE_ORDERING))(o)
    o = (concatenate([o, f2], axis=MERGE_AXIS))
    o = (ZeroPadding2D((1, 1), data_format=IMAGE_ORDERING))(o)
    o = (Conv2D(128, (3, 3), padding='valid', data_format=IMAGE_ORDERING))(o)
    o = (BatchNormalization())(o)

    o = (UpSampling2D((2, 2), data_format=IMAGE_ORDERING))(o)

    if l1_skip_conn:
        o = (concatenate([o, f1], axis=MERGE_AXIS))

    o = (ZeroPadding2D((1, 1), data_format=IMAGE_ORDERING))(o)
    o = (Conv2D(64, (3, 3), padding='valid', data_format=IMAGE_ORDERING))(o)
    o = (BatchNormalization())(o)

    o = Conv2D(n_classes, (3, 3), padding='same',
               data_format=IMAGE_ORDERING)(o)

    model = get_segmentation_model(img_input, o)

    return model
</code></pre><p>In [24]:</p>
<pre><code>def vgg_unet(n_classes, input_height=416, input_width=608, encoder_level=3):
    model = _unet(n_classes, get_vgg_encoder, input_height=input_height, input_width=input_width)
    model.model_name = "vgg_unet"
    return model

# Aerial Semantic Segmentation Drone Dataset classes: tree, grass, other
# vegetation, dirt, gravel, rocks, water, paved area, pool, person, dog,
# car, bicycle, roof, wall, fence, fence-pole, window, door, obstacle
n_classes = 23

model = vgg_unet(n_classes=n_classes, input_height=416, input_width=608)

model_from_name = {}
model_from_name["vgg_unet"] = vgg_unet
</code></pre><h3 id="trainhttpswwwkagglecombulentsiyahdeep-learning-based-semantic-segmentation-kerastrain">Train<a target="_blank" href="https://www.kaggle.com/bulentsiyah/deep-learning-based-semantic-segmentation-keras#Train"></a></h3>
<p>In [25]:</p>
<pre><code>kaggle_commit = True

epochs = 20
if kaggle_commit:
    epochs = 5
</code></pre><p>In [26]:</p>
<pre><code>model.train(
    train_images = "/kaggle/input/semantic-drone-dataset/dataset/semantic_drone_dataset/original_images/",
    train_annotations = "/kaggle/input/semantic-drone-dataset/dataset/semantic_drone_dataset/label_images_semantic/",
    checkpoints_path = "vgg_unet",
    epochs=epochs
)

  0%|          | 0/400 [00:00&lt;?, ?it/s]

Verifying training dataset

100%|██████████| 400/400 [04:14&lt;00:00, 1.57it/s]

Dataset verified!
Starting Epoch 0
Epoch 1/1
512/512 [==============================] - 751s 1s/step - loss: 1.5045 - accuracy: 0.5869
saved vgg_unet.model.0
Finished Epoch 0
Starting Epoch 1
Epoch 1/1
512/512 [==============================] - 756s 1s/step - loss: 1.1737 - accuracy: 0.6511
saved vgg_unet.model.1
Finished Epoch 1
Starting Epoch 2
Epoch 1/1
512/512 [==============================] - 730s 1s/step - loss: 1.0721 - accuracy: 0.6765
saved vgg_unet.model.2
Finished Epoch 2
Starting Epoch 3
Epoch 1/1
512/512 [==============================] - 759s 1s/step - loss: 1.0045 - accuracy: 0.6981
saved vgg_unet.model.3
Finished Epoch 3
Starting Epoch 4
Epoch 1/1
512/512 [==============================] - 759s 1s/step - loss: 0.9506 - accuracy: 0.7152
saved vgg_unet.model.4
Finished Epoch 4
</code></pre>]]></content:encoded></item><item><title><![CDATA[Learn OpenCV by Examples - with Python]]></title><description><![CDATA[About OpenCV

Officially launched in 1999, OpenCV (Open Source Computer Vision) began as an Intel initiative.
OpenCV's core is written in C++. In Python we simply use a wrapper that executes C++ code from inside Python.
First major release 1.0 was in...]]></description><link>https://www.bulentsiyah.com/learn-opencv-by-examples-with-python</link><guid isPermaLink="true">https://www.bulentsiyah.com/learn-opencv-by-examples-with-python</guid><category><![CDATA[Deep Learning]]></category><category><![CDATA[opencv]]></category><category><![CDATA[Computer Vision]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Wed, 05 Feb 2020 13:02:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1611492875961/DVlH0D4UV.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="about-opencv"><strong>About OpenCV</strong></h1>
<ul>
<li>Officially launched in 1999, OpenCV (Open Source Computer Vision) began as an Intel initiative.</li>
<li>OpenCV's core is written in C++. In Python we simply use a wrapper that executes C++ code from inside Python.</li>
<li>The first major release (1.0) came in 2006, the second in 2009, the third in 2015, and the fourth in 2018 with the OpenCV 4.0 beta.</li>
<li>It is an open-source library containing over 2500 optimized algorithms.</li>
<li>It is EXTREMELY useful for almost all computer vision applications and is supported on Windows, Linux, macOS, Android and iOS, with bindings to Python, Java and MATLAB.</li>
</ul>
<h2 id="update19052020httpswwwkagglecombulentsiyahlearn-opencv-by-examples-with-pythonupdate19052020">Update (19.05.2020)<a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#Update(19.05.2020)"></a></h2>
<p>I will always try to improve this kernel, and I made some additions in this version. Thanks for reading; I hope you find it useful.</p>
<h4 id="newly-added-contenthttpswwwkagglecombulentsiyahlearn-opencv-by-examples-with-pythonnewly-added-content">Newly Added Content<a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#Newly-Added-Content"></a></h4>
<ul>
<li>17.Background Subtraction Methods</li>
<li>18.Funny Mirrors Using OpenCV</li>
</ul>
<h1 id="content"><strong>Content</strong></h1>
<ol>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#1.">Sharpening</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#2.">Thresholding, Binarization &amp; Adaptive Thresholding</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#3.">Dilation, Erosion, Opening and Closing</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#4.">Edge Detection &amp; Image Gradients</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#5.">Perspective Transform</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#6.">Scaling, re-sizing and interpolations</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#7.">Image Pyramids</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#8.">Cropping</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#9.">Blurring</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#10.">Contours</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#11.">Approximating Contours and Convex Hull</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#12.">Identify Contours by Shape</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#13.">Line Detection - Using Hough Lines</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#14.">Counting Circles and Ellipses</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#15.">Finding Corners</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#16.">Finding Waldo</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#17.">Background Subtraction Methods</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#18.">Funny Mirrors Using OpenCV</a></li>
</ol>
<h3 id="background-subtraction-methods-outputhttpswwwkagglecombulentsiyahlearn-opencv-by-examples-with-pythonbackground-subtraction-methods-output">Background Subtraction Methods Output<a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#Background-Subtraction-Methods-Output"></a></h3>
<p><img src="https://iili.io/JMXhdv.gif" alt /></p>
<h3 id="funny-mirrors-using-opencv-outputhttpswwwkagglecombulentsiyahlearn-opencv-by-examples-with-pythonfunny-mirrors-using-opencv-output">Funny Mirrors Using OpenCV Output<a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#Funny-Mirrors-Using-OpenCV-Output"></a></h3>
<p><img src="https://iili.io/JMw3qF.png" alt /></p>
<h3 id="some-pictures-from-contenthttpswwwkagglecombulentsiyahlearn-opencv-by-examples-with-pythonsome-pictures-from-content">Some pictures from content<a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#Some-pictures-from-content"></a></h3>
<p><img src="https://iili.io/JMXPkl.png" alt /></p>
<p>In [1]:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import cv2
</code></pre><p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#1.Sharpening">1. Sharpening</a></p>
<p>By altering our kernels we can implement sharpening, which strengthens or emphasizes the edges in an image.</p>
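<p>The fact that the sharpening kernel below needs no normalization can be checked directly; a minimal sketch in plain NumPy, independent of the cell that follows:</p>

```python
import numpy as np

# The sharpening kernel used in this section: a strong positive centre
# weight surrounded by -1s. Its entries sum to 1, so overall image
# brightness is preserved and no normalization step is needed.
kernel_sharpening = np.array([[-1, -1, -1],
                              [-1,  9, -1],
                              [-1, -1, -1]])

print(kernel_sharpening.sum())  # → 1
```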
<p>In [2]:</p>
<pre><code>image = cv2.imread('/kaggle/input/opencv-samples-images/data/building.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

plt.figure(figsize=(20, 20))
plt.subplot(1, 2, 1)
plt.title("Original")
plt.imshow(image)

# Create our sharpening kernel; we don't normalize since
# the values in the matrix sum to 1
kernel_sharpening = np.array([[-1, -1, -1],
                              [-1,  9, -1],
                              [-1, -1, -1]])

# Apply the sharpening kernel to the input image
sharpened = cv2.filter2D(image, -1, kernel_sharpening)

plt.subplot(1, 2, 2)
plt.title("Image Sharpening")
plt.imshow(sharpened)
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___4_0.png" alt /></p>
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#2.Thresholding,-Binarization-&amp;-Adaptive-Thresholding">2. Thresholding, Binarization &amp; Adaptive Thresholding</a></p>
<p>In [3]:</p>
<pre><code># Load our new image
image = cv2.imread('/kaggle/input/opencv-samples-images/Origin_of_Species.jpg', 0)

plt.figure(figsize=(30, 30))
plt.subplot(3, 2, 1)
plt.title("Original")
plt.imshow(image)

# Values below 127 go to 0 (black); everything above goes to 255 (white)
ret, thresh1 = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY)
plt.subplot(3, 2, 2)
plt.title("Threshold Binary")
plt.imshow(thresh1)

# It's good practice to blur images first, as it removes noise
image = cv2.GaussianBlur(image, (3, 3), 0)

# Using adaptiveThreshold
thresh = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY, 3, 5)
plt.subplot(3, 2, 3)
plt.title("Adaptive Mean Thresholding")
plt.imshow(thresh)

_, th2 = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
plt.subplot(3, 2, 4)
plt.title("Otsu's Thresholding")
plt.imshow(th2)

# Otsu's thresholding after Gaussian filtering
plt.subplot(3, 2, 5)
blur = cv2.GaussianBlur(image, (5, 5), 0)
_, th3 = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
plt.title("Gaussian Otsu's Thresholding")
plt.imshow(th3)
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___6_0.png" alt /></p>
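<p>What cv2.threshold with THRESH_BINARY does per pixel can be reproduced in a couple of lines of plain NumPy; a sketch independent of OpenCV (note the strictly-greater-than comparison, so a pixel exactly at the threshold stays black):</p>

```python
import numpy as np

# THRESH_BINARY rule: dst = maxval if src > thresh else 0
pixels = np.array([[10, 127, 128, 200]], dtype=np.uint8)
binary = np.where(pixels > 127, 255, 0).astype(np.uint8)

print(binary)  # → [[  0   0 255 255]]
```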
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#3.Dilation,-Erosion,-Opening-and-Closing">3. Dilation, Erosion, Opening and Closing</a></p>
<p>In [4]:</p>
<pre><code>image = cv2.imread('/kaggle/input/opencv-samples-images/data/LinuxLogo.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

plt.figure(figsize=(20, 20))
plt.subplot(3, 2, 1)
plt.title("Original")
plt.imshow(image)

# Let's define our kernel size
kernel = np.ones((5, 5), np.uint8)

# Now we erode
erosion = cv2.erode(image, kernel, iterations=1)
plt.subplot(3, 2, 2)
plt.title("Erosion")
plt.imshow(erosion)

# Dilation
dilation = cv2.dilate(image, kernel, iterations=1)
plt.subplot(3, 2, 3)
plt.title("Dilation")
plt.imshow(dilation)

# Opening - Good for removing noise
opening = cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel)
plt.subplot(3, 2, 4)
plt.title("Opening")
plt.imshow(opening)

# Closing - Good for removing noise
closing = cv2.morphologyEx(image, cv2.MORPH_CLOSE, kernel)
plt.subplot(3, 2, 5)
plt.title("Closing")
plt.imshow(closing)
</code></pre><p>Out[4]:</p>
<pre><code>&lt;matplotlib.image.AxesImage at 0x7fa9340f9f60&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___8_1.png" alt /></p>
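<p>Conceptually, erosion replaces each pixel with the minimum over its structuring-element neighbourhood (dilation uses the maximum), and opening is simply erosion followed by dilation. A minimal pure-NumPy sketch of binary erosion, with a hypothetical helper name:</p>

```python
import numpy as np

def erode(img, k=3):
    """Minimal binary erosion with a k x k square structuring element
    (a sketch of what cv2.erode does on a 0/255 image)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='constant')  # border treated as background
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

# A 3x3 block of foreground shrinks to its single centre pixel
square = np.zeros((5, 5), dtype=np.uint8)
square[1:4, 1:4] = 255
print(erode(square).sum())  # → 255 (one surviving pixel)
```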
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#4.Edge-Detection-&amp;-Image-Gradients">4. Edge Detection &amp; Image Gradients</a></p>
<p>In [5]:</p>
<pre><code>image = cv2.imread('/kaggle/input/opencv-samples-images/data/fruits.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
height, width, _ = image.shape

# Extract Sobel edges (cv2.Sobel takes dx, dy derivative orders)
sobel_x = cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=5)
sobel_y = cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=5)

plt.figure(figsize=(20, 20))
plt.subplot(3, 2, 1)
plt.title("Original")
plt.imshow(image)

plt.subplot(3, 2, 2)
plt.title("Sobel X")
plt.imshow(sobel_x)

plt.subplot(3, 2, 3)
plt.title("Sobel Y")
plt.imshow(sobel_y)

sobel_OR = cv2.bitwise_or(sobel_x, sobel_y)
plt.subplot(3, 2, 4)
plt.title("sobel_OR")
plt.imshow(sobel_OR)

laplacian = cv2.Laplacian(image, cv2.CV_64F)
plt.subplot(3, 2, 5)
plt.title("Laplacian")
plt.imshow(laplacian)

# Canny edge detection uses gradient values as thresholds.
# We provide two values: threshold1 and threshold2. Any gradient value
# larger than threshold2 is considered an edge; any value below
# threshold1 is considered a non-edge. Values in between are classified
# as edges or non-edges based on how their intensities are "connected".
# Here, gradients below 50 are non-edges and gradients above 120 are edges.
canny = cv2.Canny(image, 50, 120)
plt.subplot(3, 2, 6)
plt.title("Canny")
plt.imshow(canny)
</code></pre><p>Out[5]:</p>
<pre><code>&lt;matplotlib.image.AxesImage at 0x7fa925f28358&gt;

/opt/conda/lib/python3.6/site-packages/matplotlib/cm.py:273: RuntimeWarning: invalid value encountered in multiply
  xx = (xx * 255).astype(np.uint8)
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___10_2.png" alt /></p>
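<p>The two-threshold rule described above can be written out as a small decision function (hypothetical name, illustration only; the real hysteresis step in Canny also traces connectivity across the image before keeping a weak pixel):</p>

```python
def classify_gradient(g, low=50, high=120):
    """Canny's per-pixel double-threshold rule: above `high` is an edge,
    below `low` is a non-edge, and anything in between survives only if
    it is connected to a strong edge."""
    if g > high:
        return "edge"
    if g < low:
        return "non-edge"
    return "weak"

print([classify_gradient(g) for g in (30, 80, 150)])
# → ['non-edge', 'weak', 'edge']
```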
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#5.Perpsective-Transform">5. Perspective Transform</a></p>
<p>In [6]:</p>
<pre><code>image = cv2.imread('/kaggle/input/opencv-samples-images/scan.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

plt.figure(figsize=(20, 20))
plt.subplot(1, 2, 1)
plt.title("Original")
plt.imshow(image)

# Coordinates of the 4 corners of the original image
points_A = np.float32([[320, 15],
                       [700, 215],
                       [85, 610],
                       [530, 780]])

# Coordinates of the 4 corners of the desired output
# We use the aspect ratio of an A4 sheet, 1 : 1.41
points_B = np.float32([[0, 0],
                       [420, 0],
                       [0, 594],
                       [420, 594]])

# Use the two sets of four points to compute
# the perspective transformation matrix, M
M = cv2.getPerspectiveTransform(points_A, points_B)
warped = cv2.warpPerspective(image, M, (420, 594))

plt.subplot(1, 2, 2)
plt.title("warpPerspective")
plt.imshow(warped)
</code></pre><p>Out[6]:</p>
<pre><code>&lt;matplotlib.image.AxesImage at 0x7fa9374e1908&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___12_1.png" alt /></p>
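<p>Under the hood, warpPerspective maps coordinates through the 3×3 matrix M in homogeneous form and divides by the third component. A small sketch with a hypothetical helper (the identity matrix is used only as a sanity check):</p>

```python
import numpy as np

def apply_homography(M, x, y):
    """Map a point (x, y) through a 3x3 perspective matrix, as
    cv2.warpPerspective does for every output pixel: multiply in
    homogeneous coordinates, then divide by the third component."""
    vx, vy, w = M @ np.array([x, y, 1.0])
    return vx / w, vy / w

# With the identity matrix, every point maps to itself
corner = apply_homography(np.eye(3), 320.0, 15.0)
```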
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#6.Scaling,-re-sizing-and-interpolations">6. Scaling, re-sizing and interpolations</a></p>
<p>Re-sizing is very easy with the cv2.resize function. Its arguments are cv2.resize(image, dsize (output image size), x scale, y scale, interpolation).</p>
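<p>One easy thing to trip over is that dsize is given as (width, height) while NumPy array shapes are (height, width). A small helper (hypothetical name, mimicking the convention rather than calling OpenCV) makes the rule explicit:</p>

```python
def resized_shape(h, w, dsize=None, fx=0.0, fy=0.0):
    """Output (height, width) of cv2.resize: an explicit dsize wins and
    is interpreted as (width, height); otherwise fx/fy scale the input
    width and height respectively."""
    if dsize is not None:
        w_out, h_out = dsize  # note: dsize is (width, height)!
        return h_out, w_out
    return round(h * fy), round(w * fx)

print(resized_shape(600, 800, dsize=(900, 400)))  # → (400, 900)
print(resized_shape(600, 800, fx=0.75, fy=0.75))  # → (450, 600)
```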
<p>In [7]:</p>
<pre><code>image = cv2.imread('/kaggle/input/opencv-samples-images/data/fruits.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

plt.figure(figsize=(20, 20))
plt.subplot(2, 2, 1)
plt.title("Original")
plt.imshow(image)

# Let's make our image 3/4 of its original size
image_scaled = cv2.resize(image, None, fx=0.75, fy=0.75)
plt.subplot(2, 2, 2)
plt.title("Scaling - Linear Interpolation")
plt.imshow(image_scaled)

# Let's double the size of our image
img_scaled = cv2.resize(image, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
plt.subplot(2, 2, 3)
plt.title("Scaling - Cubic Interpolation")
plt.imshow(img_scaled)

# Let's skew the re-sizing by setting exact dimensions
img_scaled = cv2.resize(image, (900, 400), interpolation=cv2.INTER_AREA)
plt.subplot(2, 2, 4)
plt.title("Scaling - Skewed Size")
plt.imshow(img_scaled)
</code></pre><p>Out[7]:</p>
<pre><code>&lt;matplotlib.image.AxesImage at 0x7fa9374055c0&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___14_1.png" alt /></p>
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#7.Image-Pyramids">7. Image Pyramids</a></p>
<p>Image pyramids are useful when scaling images up or down in object detection.</p>
<p>In [8]:</p>
<pre><code>image = cv2.imread('/kaggle/input/opencv-samples-images/data/butterfly.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(20, 20))

plt.subplot(2, 2, 1)
plt.title("Original")
plt.imshow(image)

smaller = cv2.pyrDown(image)
larger = cv2.pyrUp(smaller)

plt.subplot(2, 2, 2)
plt.title("Smaller")
plt.imshow(smaller)

plt.subplot(2, 2, 3)
plt.title("Larger")
plt.imshow(larger)
</code></pre><p>Out[8]:</p>
<pre><code>&lt;matplotlib.image.AxesImage at 0x7fa925e03710&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___16_1.png" alt /></p>
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#8.Cropping">8. Cropping</a></p>
<p>In [9]:</p>
<pre><code>image = cv2.imread('/kaggle/input/opencv-samples-images/data/messi5.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(20, 20))

plt.subplot(2, 2, 1)
plt.title("Original")
plt.imshow(image)

height, width = image.shape[:2]

# Let's get the starting pixel coordinates (top left of cropping rectangle)
start_row, start_col = int(height * .25), int(width * .25)
# Let's get the ending pixel coordinates (bottom right)
end_row, end_col = int(height * .75), int(width * .75)

# Simply use indexing to crop out the rectangle we desire
cropped = image[start_row:end_row, start_col:end_col]

plt.subplot(2, 2, 2)
plt.title("Cropped")
plt.imshow(cropped)
</code></pre><p>Out[9]:</p>
<pre><code>&lt;matplotlib.image.AxesImage at 0x7fa925d6c0b8&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___18_1.png" alt /></p>
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#9.Blurring">9. Blurring</a></p>
<p>In [10]:</p>
<pre><code>image = cv2.imread('/kaggle/input/opencv-samples-images/data/home.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(20, 20))

plt.subplot(2, 2, 1)
plt.title("Original")
plt.imshow(image)

# Creating our 3 x 3 kernel
kernel_3x3 = np.ones((3, 3), np.float32) / 9

# We use cv2.filter2D to convolve the kernel with an image
blurred = cv2.filter2D(image, -1, kernel_3x3)
plt.subplot(2, 2, 2)
plt.title("3x3 Kernel Blurring")
plt.imshow(blurred)

# Creating our 7 x 7 kernel
kernel_7x7 = np.ones((7, 7), np.float32) / 49

blurred2 = cv2.filter2D(image, -1, kernel_7x7)
plt.subplot(2, 2, 3)
plt.title("7x7 Kernel Blurring")
plt.imshow(blurred2)
</code></pre><p>Out[10]:</p>
<pre><code>&lt;matplotlib.image.AxesImage at 0x7fa925cab128&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___20_1.png" alt /></p>
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#10.Contours">10. Contours</a></p>
<p>In [11]:</p>
<pre><code># Let's load a simple image with 3 black squares
image = cv2.imread('/kaggle/input/opencv-samples-images/data/pic3.png')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(20, 20))

plt.subplot(2, 2, 1)
plt.title("Original")
plt.imshow(image)

# Grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Find Canny edges
edged = cv2.Canny(gray, 30, 200)
plt.subplot(2, 2, 2)
plt.title("Canny Edges")
plt.imshow(edged)

# Finding Contours
# Use a copy of your image, e.g. edged.copy(), since findContours alters the image
contours, hierarchy = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
plt.subplot(2, 2, 3)
plt.title("Canny Edges After Contouring")
plt.imshow(edged)

print("Number of Contours found = " + str(len(contours)))

# Draw all contours
# Use '-1' as the 3rd parameter to draw all of them
cv2.drawContours(image, contours, -1, (0, 255, 0), 3)
plt.subplot(2, 2, 4)
plt.title("Contours")
plt.imshow(image)

Number of Contours found = 4
</code></pre><p>Out[11]:</p>
<pre><code>&lt;matplotlib.image.AxesImage at 0x7fa925b185c0&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___22_2.png" alt /></p>
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#11.Approximating-Contours-and-Convex-Hull">11. Approximating Contours and Convex Hull</a></p>
<p>cv2.approxPolyDP(contour, Approximation Accuracy, Closed)</p>
<ul>
<li>contour -- is the individual contour we wish to approximate</li>
<li>Approximation Accuracy -- determines how closely the output polygon follows the contour. Small values give precise approximations; large values give more generic ones. A good rule of thumb is less than 5% of the contour perimeter</li>
<li>Closed -- a Boolean value that states whether the approximate contour should be open or closed</li>
</ul>
<p>In [12]:</p>
<pre><code># Load image and keep a copy
image = cv2.imread('/kaggle/input/opencv-samples-images/house.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(20, 20))

plt.subplot(2, 2, 1)
plt.title("Original")
plt.imshow(image)
orig_image = image.copy()

# Grayscale and binarize
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)

# Find contours
contours, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

# Iterate through each contour and compute the bounding rectangle
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(orig_image, (x, y), (x + w, y + h), (0, 0, 255), 2)

plt.subplot(2, 2, 2)
plt.title("Bounding Rectangle")
plt.imshow(orig_image)

# Iterate through each contour and compute the approx contour
for c in contours:
    # Calculate accuracy as a percent of the contour perimeter
    accuracy = 0.03 * cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, accuracy, True)
    cv2.drawContours(image, [approx], 0, (0, 255, 0), 2)

plt.subplot(2, 2, 3)
plt.title("Approx Poly DP")
plt.imshow(image)
plt.show()

# Convex Hull
image = cv2.imread('/kaggle/input/opencv-samples-images/hand.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
plt.figure(figsize=(20, 20))

plt.subplot(1, 2, 1)
plt.title("Original Image")
plt.imshow(image)

# Threshold the image
ret, thresh = cv2.threshold(gray, 176, 255, 0)

# Find contours
contours, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

# Sort contours by area and then remove the largest frame contour
n = len(contours) - 1
contours = sorted(contours, key=cv2.contourArea, reverse=False)[:n]

# Iterate through contours and draw the convex hull
for c in contours:
    hull = cv2.convexHull(c)
    cv2.drawContours(image, [hull], 0, (0, 255, 0), 2)

plt.subplot(1, 2, 2)
plt.title("Convex Hull")
plt.imshow(image)

</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___24_1.png" alt /></p>
<p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___24_2.png" alt /></p>
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#12.Identifiy-Contours-by-Shape">12. Identify Contours by Shape</a></p>
<p>In [13]:</p>
<pre><code>image = cv2.imread('/kaggle/input/opencv-samples-images/someshapes.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
plt.figure(figsize=(20, 20))

plt.subplot(2, 2, 1)
plt.title("Original")
plt.imshow(image)

ret, thresh = cv2.threshold(gray, 127, 255, 1)

# Extract Contours
contours, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

for cnt in contours:
    # Get approximate polygons
    approx = cv2.approxPolyDP(cnt, 0.01 * cv2.arcLength(cnt, True), True)

    if len(approx) == 3:
        shape_name = "Triangle"
        cv2.drawContours(image, [cnt], 0, (0, 255, 0), -1)

        # Find contour center to place text at the center
        M = cv2.moments(cnt)
        cx = int(M['m10'] / M['m00'])
        cy = int(M['m01'] / M['m00'])
        cv2.putText(image, shape_name, (cx - 50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 2)

    elif len(approx) == 4:
        x, y, w, h = cv2.boundingRect(cnt)
        M = cv2.moments(cnt)
        cx = int(M['m10'] / M['m00'])
        cy = int(M['m01'] / M['m00'])

        # Check to see if the 4-sided polygon is a square or a rectangle
        # cv2.boundingRect returns the top left and then width and height
        if abs(w - h) &lt;= 3:
            shape_name = "Square"
            cv2.drawContours(image, [cnt], 0, (0, 125, 255), -1)
            cv2.putText(image, shape_name, (cx - 50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 2)
        else:
            shape_name = "Rectangle"
            cv2.drawContours(image, [cnt], 0, (0, 0, 255), -1)
            cv2.putText(image, shape_name, (cx - 50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 2)

    elif len(approx) == 10:
        shape_name = "Star"
        cv2.drawContours(image, [cnt], 0, (255, 255, 0), -1)
        M = cv2.moments(cnt)
        cx = int(M['m10'] / M['m00'])
        cy = int(M['m01'] / M['m00'])
        cv2.putText(image, shape_name, (cx - 50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 2)

    elif len(approx) &gt;= 15:
        shape_name = "Circle"
        cv2.drawContours(image, [cnt], 0, (0, 255, 255), -1)
        M = cv2.moments(cnt)
        cx = int(M['m10'] / M['m00'])
        cy = int(M['m01'] / M['m00'])
        cv2.putText(image, shape_name, (cx - 50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 2)

plt.subplot(2, 2, 2)
plt.title("Identifying Shapes")
plt.imshow(image)
</code></pre><p>Out[13]:</p>
<pre><code>&lt;matplotlib.image.AxesImage at 0x7fa9257c8470&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___26_1.png" alt /></p>
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#13.Line-Detection---Using-Hough-Lines">13. Line Detection - Using Hough Lines</a></p>
<p>cv2.HoughLines(binarized/thresholded image, 𝜌 accuracy, 𝜃 accuracy, threshold)</p>
<ul>
<li>threshold -- the minimum number of votes (accumulator intersections) a candidate must receive to be considered a line</li>
</ul>
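<p>Each (rho, theta) pair returned by cv2.HoughLines parametrizes a line rather than giving endpoints, so drawing requires a conversion. The arithmetic the cell below performs can be sketched as a small pure-NumPy helper (<code>hough_line_to_segment</code> is a hypothetical name, not an OpenCV function):</p>

```python
import numpy as np

def hough_line_to_segment(rho, theta, length=1000):
    # (rho, theta) names the line x*cos(theta) + y*sin(theta) = rho;
    # (x0, y0) is the point on the line closest to the origin.
    a, b = np.cos(theta), np.sin(theta)
    x0, y0 = a * rho, b * rho
    # Walk `length` pixels along the line direction (-b, a) both ways
    p1 = (int(x0 + length * (-b)), int(y0 + length * a))
    p2 = (int(x0 - length * (-b)), int(y0 - length * a))
    return p1, p2

# A vertical line 50 px from the origin (theta = 0): both endpoints share x = 50
p1, p2 = hough_line_to_segment(50, 0.0)
print(p1, p2)  # (50, 1000) (50, -1000)
```

<p>The 1000-pixel walk simply makes the segment long enough to span the whole image.</p>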
<p>In [14]:</p>
<pre><code>image = cv2.imread('/kaggle/input/opencv-samples-images/data/sudoku.png')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(20, 20))

# Grayscale and Canny Edges extracted
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 170, apertureSize=3)

plt.subplot(2, 2, 1)
plt.title("edges")
plt.imshow(edges)

# Run HoughLines using a rho accuracy of 1 pixel,
# a theta accuracy of np.pi / 180, which is 1 degree
# Our line threshold is set to 200 (number of points on the line)
lines = cv2.HoughLines(edges, 1, np.pi / 180, 200)

# We iterate through each line and convert it to the format
# required by cv2.line (i.e. requiring end points)
for line in lines:
    rho, theta = line[0]
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a * rho
    y0 = b * rho
    x1 = int(x0 + 1000 * (-b))
    y1 = int(y0 + 1000 * (a))
    x2 = int(x0 - 1000 * (-b))
    y2 = int(y0 - 1000 * (a))
    cv2.line(image, (x1, y1), (x2, y2), (255, 0, 0), 2)

plt.subplot(2, 2, 2)
plt.title("Hough Lines")
plt.imshow(image)
</code></pre><p>Out[14]:</p>
<pre><code>&lt;matplotlib.image.AxesImage at 0x7fa925768860&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___28_1.png" alt /></p>
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#14.Counting-Circles-and-Ellipses">14. Counting Circles and Ellipses</a></p>
<p>In [15]:</p>
<pre><code>image = cv2.imread('/kaggle/input/opencv-samples-images/blobs.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(20, 20))

# Initialize the detector using the default parameters
detector = cv2.SimpleBlobDetector_create()

# Detect blobs
keypoints = detector.detect(image)

# Draw blobs on our image as red circles
blank = np.zeros((1, 1))
blobs = cv2.drawKeypoints(image, keypoints, blank, (0, 0, 255),
                          cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

number_of_blobs = len(keypoints)
text = "Total Number of Blobs: " + str(len(keypoints))
cv2.putText(blobs, text, (20, 550), cv2.FONT_HERSHEY_SIMPLEX, 1, (100, 0, 255), 2)

# Display image with blob keypoints
plt.subplot(2, 2, 1)
plt.title("Blobs using default parameters")
plt.imshow(blobs)

# Set our filtering parameters
# Initialize parameter setting using cv2.SimpleBlobDetector
params = cv2.SimpleBlobDetector_Params()

# Set Area filtering parameters
params.filterByArea = True
params.minArea = 100

# Set Circularity filtering parameters
params.filterByCircularity = True
params.minCircularity = 0.9

# Set Convexity filtering parameters
params.filterByConvexity = False
params.minConvexity = 0.2

# Set inertia filtering parameters
params.filterByInertia = True
params.minInertiaRatio = 0.01

# Create a detector with the parameters
detector = cv2.SimpleBlobDetector_create(params)

# Detect blobs
keypoints = detector.detect(image)

# Draw blobs on our image as green circles
blank = np.zeros((1, 1))
blobs = cv2.drawKeypoints(image, keypoints, blank, (0, 255, 0),
                          cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

number_of_blobs = len(keypoints)
text = "Number of Circular Blobs: " + str(len(keypoints))
cv2.putText(blobs, text, (20, 550), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 100, 255), 2)

# Show blobs
plt.subplot(2, 2, 2)
plt.title("Filtering Circular Blobs Only")
plt.imshow(blobs)
</code></pre><p>Out[15]:</p>
<pre><code>&lt;matplotlib.image.AxesImage at 0x7fa92569a6d8&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___30_1.png" alt /></p>
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#15.Finding-Corners">15. Finding Corners</a></p>
<p>In [16]:</p>
<pre><code># Load image, then convert to grayscale
image = cv2.imread('/kaggle/input/opencv-samples-images/data/chessboard.png')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(10, 10))

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# The cornerHarris function requires the array datatype to be float32
gray = np.float32(gray)
harris_corners = cv2.cornerHarris(gray, 3, 3, 0.05)

# We use dilation of the corner points to enlarge them
kernel = np.ones((7, 7), np.uint8)
harris_corners = cv2.dilate(harris_corners, kernel, iterations=10)

# Threshold for an optimal value; it may vary depending on the image
image[harris_corners > 0.025 * harris_corners.max()] = [255, 127, 127]

plt.subplot(1, 1, 1)
plt.title("Harris Corners")
plt.imshow(image)
</code></pre><p>Out[16]:</p>
<pre><code>&lt;matplotlib.image.AxesImage at 0x7fa925618dd8&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___32_1.png" alt /></p>
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#16.Finding-Waldo">16. Finding Waldo</a></p>
<p>In [17]:</p>
<pre><code># Load input image and convert to grayscale
image = cv2.imread('/kaggle/input/opencv-samples-images/WaldoBeach.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(30, 30))

plt.subplot(2, 2, 1)
plt.title("Where is Waldo?")
plt.imshow(image)

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Load template image (as grayscale)
template = cv2.imread('/kaggle/input/opencv-samples-images/waldo.jpg', 0)

result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

# Create a bounding box around the best match
top_left = max_loc
bottom_right = (top_left[0] + 50, top_left[1] + 50)
cv2.rectangle(image, top_left, bottom_right, (0, 0, 255), 5)

plt.subplot(2, 2, 2)
plt.title("Waldo")
plt.imshow(image)
</code></pre><p>Out[17]:</p>
<pre><code>&lt;matplotlib.image.AxesImage at 0x7fa9255abc88&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___34_1.png" alt /></p>
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#17.Background-Subtraction-Methods">17. Background Subtraction Methods</a></p>
<p>source: <a target="_blank" href="https://docs.opencv.org/3.4/d1/dc5/tutorial_background_subtraction.html">https://docs.opencv.org/3.4/d1/dc5/tutorial_background_subtraction.html</a></p>
<h2 id="how-to-use-background-subtraction-methodshttpswwwkagglecombulentsiyahlearn-opencv-by-examples-with-pythonhow-to-use-background-subtraction-methods">How to Use Background Subtraction Methods<a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#How-to-Use-Background-Subtraction-Methods"></a></h2>
<p>Background subtraction (BS) is a common and widely used technique for generating a foreground mask (namely, a binary image containing the pixels belonging to moving objects in the scene) by using static cameras.</p>
<p>As the name suggests, BS calculates the foreground mask performing a subtraction between the current frame and a background model, containing the static part of the scene or, more in general, everything that can be considered as background given the characteristics of the observed scene.</p>
<p><img src="https://docs.opencv.org/3.4/Background_Subtraction_Tutorial_Scheme.png" alt /></p>
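<p>The frame-vs-background comparison sketched in the scheme above can be illustrated in a few lines of NumPy. This is a toy example with made-up pixel values, not the adaptive MOG2/KNN models used in the cells below (those also estimate and update the background model over time):</p>

```python
import numpy as np

# Toy 4x4 grayscale background and a current frame in which a small
# region has changed (a "moving object" entered the scene).
background = np.full((4, 4), 100, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200  # the moving object

# Foreground mask: pixels whose absolute difference from the background
# model exceeds a threshold are marked as foreground (255).
diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
fg_mask = np.where(diff > 30, 255, 0).astype(np.uint8)
```

<p>MOG2 and KNN differ from this sketch mainly in how the background model is built: it is learned per pixel from the frame history rather than fixed in advance.</p>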
<p>In [18]:</p>
<pre><code>import cv2
import matplotlib.pyplot as plt

algo = 'MOG2'
if algo == 'MOG2':
    backSub = cv2.createBackgroundSubtractorMOG2()
else:
    backSub = cv2.createBackgroundSubtractorKNN()

plt.figure(figsize=(20, 20))

frame = cv2.imread('/kaggle/input/opencv-samples-images/Background_Subtraction_Tutorial_frame.png')
fgMask = backSub.apply(frame)

plt.subplot(2, 2, 1)
plt.title("Frame")
plt.imshow(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
plt.subplot(2, 2, 2)
plt.title("FG Mask")
plt.imshow(fgMask, cmap='gray')  # the mask is single-channel

frame = cv2.imread('/kaggle/input/opencv-samples-images/Background_Subtraction_Tutorial_frame_1.png')
fgMask = backSub.apply(frame)

plt.subplot(2, 2, 3)
plt.title("Frame")
plt.imshow(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
plt.subplot(2, 2, 4)
plt.title("FG Mask")
plt.imshow(fgMask, cmap='gray')
</code></pre><p>Out[18]:</p>
<pre><code>&lt;matplotlib.image.AxesImage at 0x7fa9254bea20&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___37_1.png" alt /></p>
<h2 id="if-you-want-to-run-it-on-video-and-locally-you-must-set-it-to-while-true-do-not-try-on-kaggle-you-will-get-the-errorhttpswwwkagglecombulentsiyahlearn-opencv-by-examples-with-pythonif-you-want-to-run-it-on-video-and-locally-you-must-set-it-to-while-true-do-not-try-on-kaggle-you-will-get-the-error">If you want to run it on a video locally, change the loop condition to True (do not try this on Kaggle; you will get an error)<a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#If-you-want-to-run-it-on-video-and-locally,-you-must-set-it-to-(While"></a></h2>
<p>In [19]:</p>
<pre><code>import cv2
import numpy as np

algo = 'MOG2'
inputt = '/kaggle/input/opencv-samples-images/video_input/Background_Subtraction_Tutorial_frame.mp4'
capture = cv2.VideoCapture(cv2.samples.findFileOrKeep(inputt))
frame_width = int(capture.get(3))
frame_height = int(capture.get(4))
out = cv2.VideoWriter('Background_Subtraction_Tutorial_frame_output.mp4',
                      cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), 30,
                      (frame_width, frame_height))

if algo == 'MOG2':
    backSub = cv2.createBackgroundSubtractorMOG2()
else:
    backSub = cv2.createBackgroundSubtractorKNN()

# If you want to run it on a video locally, change the condition below
# to True. (Do not try it on Kaggle; you will get an error.)
while False:
    ret, frame = capture.read()
    if frame is None:
        break
    fgMask = backSub.apply(frame)
    cv2.rectangle(frame, (10, 2), (100, 20), (255, 255, 255), -1)
    cv2.imshow('Frame', frame)
    cv2.imshow('FG Mask', fgMask)
    # The mask is single-channel; convert to 3 channels before writing
    out.write(cv2.cvtColor(fgMask, cv2.COLOR_GRAY2BGR))
    keyboard = cv2.waitKey(1) & 0xFF
    if keyboard == 27 or keyboard == ord('q'):
        cv2.destroyAllWindows()
        break

capture.release()
out.release()
cv2.destroyAllWindows()
</code></pre><h2 id="the-result-you-will-get-on-video-and-locallyhttpswwwkagglecombulentsiyahlearn-opencv-by-examples-with-pythonthe-result-you-will-get-on-video-and-locally">The result you will get on video and locally<a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#The-result-you-will-get-on-video-and-locally"></a></h2>
<p><img src="https://iili.io/JMXhdv.gif" alt /></p>
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#18.Funny-Mirrors-Using-OpenCV">18. Funny Mirrors Using OpenCV</a></p>
<p>Source: <a target="_blank" href="https://www.learnopencv.com/funny-mirrors-using-opencv/">https://www.learnopencv.com/funny-mirrors-using-opencv/</a></p>
<p>Funny mirrors are not plane mirrors but a combination of convex/concave reflective surfaces that produce distortion effects that look funny as we move in front of these mirrors.</p>
<h3 id="how-does-it-work-httpswwwkagglecombulentsiyahlearn-opencv-by-examples-with-pythonhow-does-it-work">How does it work?<a target="_blank" href="https://www.kaggle.com/bulentsiyah/learn-opencv-by-examples-with-python#How-does-it-work-?"></a></h3>
<p>The entire project can be divided into three major steps :</p>
<ul>
<li>Create a virtual camera.</li>
<li>Define a 3D surface (the mirror surface) and project it into the virtual camera using a suitable projection matrix.</li>
<li>Use the image coordinates of the projected 3D surface points to apply mesh-based warping and obtain the funny-mirror effect.</li>
</ul>
<p><img src="https://www.learnopencv.com/wp-content/uploads/2020/04/steps-for-funny-mirrors.jpg" alt /></p>
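<p>The mesh-based warping in the last step is ultimately a per-pixel lookup: for every output pixel, the <code>map_x</code> and <code>map_y</code> arrays name the source pixel to sample. A minimal nearest-neighbour stand-in for what <code>cv2.remap</code> does, on a toy single-channel image (the real function additionally interpolates between pixels and handles borders):</p>

```python
import numpy as np

def remap_nearest(img, map_x, map_y):
    """Nearest-neighbour remap: output[y, x] = img[map_y[y, x], map_x[y, x]]."""
    h, w = img.shape
    ys = np.clip(np.rint(map_y), 0, h - 1).astype(int)
    xs = np.clip(np.rint(map_x), 0, w - 1).astype(int)
    return img[ys, xs]

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
xs, ys = np.meshgrid(np.arange(4, dtype=float), np.arange(4, dtype=float))

identity = remap_nearest(img, xs, ys)     # identity map: image unchanged
shifted = remap_nearest(img, xs + 1, ys)  # shifts the image one pixel left
```

<p><code>cv2.remap</code> applies exactly this lookup with sub-pixel interpolation, which is why building <code>map_x</code> and <code>map_y</code> from the projected mesh is all the warping step needs.</p>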
<p>In [20]:</p>
<pre><code>!pip install vcam 

Collecting vcam
  Downloading https://files.pythonhosted.org/packages/5a/81/31e561c9e2be275df47e313786932ce8e176f29616b65c19a1ef23ccaa3b/vcam-1.0-py3-none-any.whl
Installing collected packages: vcam
Successfully installed vcam-1.0
</code></pre><p>In [21]:</p>
<pre><code>import cv2
import numpy as np
import math
from vcam import vcam, meshGen
import matplotlib.pyplot as plt

plt.figure(figsize=(20, 20))

# Reading the input image. Pass the path of the image you would like to use.
img = cv2.imread("/kaggle/input/opencv-samples-images/minions.jpg")
H, W = img.shape[:2]

# Creating the virtual camera object
c1 = vcam(H=H, W=W)

# Creating the surface object
plane = meshGen(H, W)

# We generate a mirror where, for each 3D point, the Z coordinate is a
# Gaussian bump: Z = 20 * exp(-0.5*((x/w)/0.1)^2) / (0.1*sqrt(2*pi))
plane.Z += 20 * np.exp(-0.5 * ((plane.X * 1.0 / plane.W) / 0.1) ** 2) / (0.1 * np.sqrt(2 * np.pi))

pts3d = plane.getPlane()
pts2d = c1.project(pts3d)
map_x, map_y = c1.getMaps(pts2d)
output = cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

plt.subplot(1, 2, 1)
plt.title("Funny Mirror")
plt.imshow(cv2.cvtColor(np.hstack((img, output)), cv2.COLOR_BGR2RGB))
</code></pre><p>Out[21]:</p>
<pre><code>&lt;matplotlib.image.AxesImage at 0x7fa9259626a0&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___43_1.png" alt /></p>
<p>Now that we know we can create different distortion effects by defining Z as a function of X and Y, let us create a few more. We only need to change the line where Z is defined, and this should also help you create your own effects.</p>
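<p>Before plugging a new Z into the vcam pipeline, its profile can be previewed with plain NumPy. Below, the Gaussian bump used above is evaluated over a hypothetical width of 101 pixels with centred coordinates (an illustration, not part of the vcam API); it peaks at the centre column, which is what stretches the middle of the image:</p>

```python
import numpy as np

W = 101                                # hypothetical image width
x = np.linspace(-W / 2.0, W / 2.0, W)  # centred pixel coordinates
sigma = 0.1

# Same bump as the mirror code: a Gaussian in the normalised x coordinate
Z = 20 * np.exp(-0.5 * ((x / W) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
```

<p>Plotting Z (or simply inspecting where it is large) shows where the mirror surface bulges, and therefore where the warped image will be distorted most.</p>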
<p>In [22]:</p>
<pre><code>plt.figure(figsize=(20, 20))

# Reading the input image. Pass the path of the image you would like to use.
img = cv2.imread("/kaggle/input/opencv-samples-images/minions.jpg")
H, W = img.shape[:2]

# Creating the virtual camera object
c1 = vcam(H=H, W=W)

# Creating the surface object
plane = meshGen(H, W)

# This time the Gaussian bump runs along y:
# Z = 20 * exp(-0.5*((y/h)/0.1)^2) / (0.1*sqrt(2*pi))
plane.Z += 20 * np.exp(-0.5 * ((plane.Y * 1.0 / plane.H) / 0.1) ** 2) / (0.1 * np.sqrt(2 * np.pi))

pts3d = plane.getPlane()
pts2d = c1.project(pts3d)
map_x, map_y = c1.getMaps(pts2d)
output = cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

plt.subplot(1, 2, 1)
plt.title("Funny Mirror")
plt.imshow(cv2.cvtColor(np.hstack((img, output)), cv2.COLOR_BGR2RGB))
</code></pre><p>Out[22]:</p>
<pre><code>&lt;matplotlib.image.AxesImage at 0x7fa9258bbdd8&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___45_1.png" alt /></p>
<p>Let's create something using the sine function!</p>
<p>In [23]:</p>
<pre><code>plt.figure(figsize=(20, 20))

# Reading the input image. Pass the path of the image you would like to use.
img = cv2.imread("/kaggle/input/opencv-samples-images/minions.jpg")
H, W = img.shape[:2]

# Creating the virtual camera object
c1 = vcam(H=H, W=W)

# Creating the surface object
plane = meshGen(H, W)

# A wavy mirror: Z = 20*sin(2*pi*(x/w - 1/4)) + 20*sin(2*pi*(y/h - 1/4))
plane.Z += 20 * np.sin(2 * np.pi * ((plane.X - plane.W / 4.0) / plane.W)) \
         + 20 * np.sin(2 * np.pi * ((plane.Y - plane.H / 4.0) / plane.H))

pts3d = plane.getPlane()
pts2d = c1.project(pts3d)
map_x, map_y = c1.getMaps(pts2d)
output = cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

plt.subplot(1, 2, 1)
plt.title("Funny Mirror")
plt.imshow(cv2.cvtColor(np.hstack((img, output)), cv2.COLOR_BGR2RGB))
</code></pre><p>Out[23]:</p>
<pre><code>&lt;matplotlib.image.AxesImage at 0x7fa925a115f8&gt;
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34321869/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..nrkCnmPn0_A7tJqlWshiEw.p3cTo2rztT4jOTODAnm_CAB6hhjc4cvhJjna8XvWLedifzk3gyRmqhrB0sOXumA3WfFU-353nTa0YGfS1wq0XzAUYrKIPxDJFZb_VDkqS1VCpdUxqOWPQ7pwDVeCCmHt4nuSDHphitfPLwZznkt6tTW2cRl1y7Gp9v_aKfP1BbpAHHyrAeKPf8pn0BhP2d2x2xLPCBxUQJ2MKz4PoyABjtjBBVjOv2s08xCGeDxH2ub6BG_LQL0VANIIGcAzqp2QD0oa3kewZoZBOH-IAfz17nBNcRaAn5fOdV8m3cVd-AgyVqYr1lolRxYrQHy_lbMEbEntFtZJhcAS8L1_4-Qh-8oaHtZXHDBr5nArGB2TzSD5jwBBPeZLqwGW4vcLwoygIwL43NhSqJQa7UgZHlsx8tpFu3St9P5eb8_P9oJDyT6u_Ux0HRPDxLFzhNcCbQlkY-72nakzuRKM-Osl9MVhwhEhEJwQwiky2NHSOrPjahQvWRlv_XjAGsUnrJ1vcSdX_sYcQ8C56NwoFYYQDwAoZFpvM8SrhDAdr044b8qhOKToVor_C3Q80pRH92dGloDTao-eorgvWe6GHJKfcrkl_X4KOkJh0zDFoZxCiIGn7M_Pxk6mB39VmbRYNENNGDlgagsJ2bUanmx9suOHD_GK-u-F7fLdR9VAjfgLIDBe82k.OaOoGdxfEmN8epKs6XSEbg/__results___files/__results___47_1.png" alt /></p>
]]></content:encoded></item><item><title><![CDATA[Machine Learning Exercise]]></title><description><![CDATA[It is the kernel that I have tried and compiled from the courses of DATAI Team (Language of the courses is Turkish: Machine Learning ve Python: A'dan Z'ye Makine Öğrenmesi), which is Grandmaster on Kaggle and has more than 15 courses on Udemy.
Conten...]]></description><link>https://www.bulentsiyah.com/machine-learning-egzersizleri-kaggle</link><guid isPermaLink="true">https://www.bulentsiyah.com/machine-learning-egzersizleri-kaggle</guid><category><![CDATA[Machine Learning]]></category><category><![CDATA[Python]]></category><category><![CDATA[Tutorial]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Mon, 13 Aug 2018 14:30:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1611498733699/Z1d--BYNW.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It is the kernel that I have tried and compiled from the courses of <a target="_blank" href="https://www.udemy.com/user/datai-team/">DATAI Team</a> (Language of the courses is Turkish: <a target="_blank" href="https://www.udemy.com/machine-learning-ve-python-adan-zye-makine-ogrenmesi-4">Machine Learning ve Python: A'dan Z'ye Makine Öğrenmesi</a>), which is <a target="_blank" href="https://www.kaggle.com/kanncaa1">Grandmaster on Kaggle</a> and has more than 15 courses on Udemy.</p>
<h1 id="contenthttpswwwkagglecombulentsiyahmachine-learning-exercisecontent"><strong>Content</strong><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Content"></a></h1>
<h2 id="regressionhttpswwwkagglecombulentsiyahmachine-learning-exerciseregression">Regression<a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Regression"></a></h2>
<ul>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#1.">Linear Regression</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#2.">Multiple Linear Regression</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#3.">Polynomial Linear Regression</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#4.">Support Vector Regression</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#5.">Decision Tree Regression</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#6.">Random Forest Regression</a></li>
</ul>
<p><img src="https://iili.io/J1bpse.md.png" alt /></p>
<h2 id="classificationhttpswwwkagglecombulentsiyahmachine-learning-exerciseclassification">Classification<a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Classification"></a></h2>
<ul>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#7.">K-Nearest Neighbour (KNN) Classification</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#8.">Support Vector Machine (SVM) Classification</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#9.">Naive Bayes Classification</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#10.">Decision Tree Classification</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#11.">Random Forest Classification</a></li>
</ul>
<p><img src="https://iili.io/J1bmX9.png" alt /></p>
<h2 id="clusteringhttpswwwkagglecombulentsiyahmachine-learning-exerciseclustering">Clustering<a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Clustering"></a></h2>
<ul>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#12.">K-Means Clustering</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#13.">Hierarchical Clustering</a></li>
</ul>
<p><img src="https://iili.io/J1m9qu.png" alt /></p>
<h2 id="other-contenthttpswwwkagglecombulentsiyahmachine-learning-exerciseother-content">Other Content<a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Other-Content"></a></h2>
<ul>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#14.">Natural Language Process (NLP)</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#15.">Principal Component Analysis (PCA)</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#16.">Model Selection</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#17.">Recommendation Systems</a></li>
</ul>
<h1 id="regressionhttpswwwkagglecombulentsiyahmachine-learning-exerciseregression">Regression<a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Regression"></a></h1>
<h2>Linear Regression<a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Linear-Regression"></a></h2>
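<p>The cells in this section fit and score the model on the same 14 rows, so the reported R&sup2; is a training-set score. A hedged sketch (entirely synthetic data, not the notebook's CSV) of the usual safeguard, scoring on a held-out split:</p>

```python
# Hedged sketch with synthetic data: score on a held-out test split
# instead of the training data itself.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
x = rng.uniform(0, 10, size=(50, 1))                     # e.g. years of experience
y = 1663 + 1138 * x[:, 0] + rng.normal(0, 300, size=50)  # noisy linear "salary"

x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.3, random_state=0)
model = LinearRegression().fit(x_tr, y_tr)
print("test r_square score:", r2_score(y_te, model.predict(x_te)))
```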
<p>In [1]:</p>
<pre><code># import library
import pandas as pd
import matplotlib.pyplot as plt
import math

# import data
data = pd.read_csv("../input/linearregressiondataset3/linear-regression-dataset.csv")
print(data.info())
print(data.head())
#print(data.describe())

&lt;class 'pandas.core.frame.DataFrame'&gt;
RangeIndex: 14 entries, 0 to 13
Data columns (total 2 columns):
deneyim    14 non-null float64
maas       14 non-null int64
dtypes: float64(1), int64(1)
memory usage: 304.0 bytes
None
   deneyim  maas
0      0.5  2500
1      0.0  2250
2      1.0  2750
3      5.0  8000
4      8.0  9000
</code></pre><p>In [2]:</p>
<pre><code># plot data
plt.scatter(data.deneyim,data.maas)
plt.xlabel("deneyim")
plt.ylabel("maas")
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___3_0.png" alt /></p>
<p>In [3]:</p>
<pre><code>#%% linear regression
# sklearn library
from sklearn.linear_model import LinearRegression

# linear regression model
linear_reg = LinearRegression()
x = data.deneyim.values.reshape(-1,1)
y = data.maas.values.reshape(-1,1)
linear_reg.fit(x,y)
print('R sq: ', linear_reg.score(x, y))
print('Correlation: ', math.sqrt(linear_reg.score(x, y)))

R sq:  0.9775283164949903
Correlation:  0.9887003168275968
</code></pre><p>In [4]:</p>
<pre><code>#%% prediction
import numpy as np
print("Coefficient for X: ", linear_reg.coef_)
print("Intercept for X: ", linear_reg.intercept_)
print("Regression line is: y = " + str(linear_reg.intercept_[0]) + " + (x * " + str(linear_reg.coef_[0][0]) + ")")

# maas = 1663 + 1138*deneyim
maas_yeni = 1663 + 1138*11
print(maas_yeni)

array = np.array([11]).reshape(-1,1)
print(linear_reg.predict(array))

Coefficient for X:  [[1138.34819698]]
Intercept for X:  [1663.89519747]
Regression line is: y = 1663.8951974741067 + (x * 1138.3481969755717)
14181
[[14185.72536421]]
</code></pre><p>In [5]:</p>
<pre><code># visualize line
array = np.array([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]).reshape(-1,1)  # deneyim
plt.scatter(x,y)
#plt.show()
y_head = linear_reg.predict(array)  # maas
plt.plot(array, y_head, color = "red")

array = np.array([100]).reshape(-1,1)
linear_reg.predict(array)
</code></pre><p>Out[5]:</p>
<pre><code>array([[115498.71489503]])
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___6_1.png" alt /></p>
<p>In [6]:</p>
<pre><code>y_head = linear_reg.predict(x)  # maas

from sklearn.metrics import r2_score
print("r_square score: ", r2_score(y,y_head))

r_square score:  0.9775283164949903
</code></pre><h2>Multiple Linear Regression<a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Multiple-Linear-Regression"></a></h2>
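<p>Multiple linear regression fits y = b0 + b1*x1 + b2*x2, one slope per input feature. A minimal sketch below, using made-up experience/age values rather than the notebook's dataset, shows that <code>coef_</code> returns exactly one coefficient per column:</p>

```python
# Hedged sketch with synthetic, noise-free data: two features, one slope each.
import numpy as np
from sklearn.linear_model import LinearRegression

# columns: [deneyim (experience), yas (age)] -- made-up values
X = np.array([[0.5, 22], [1.0, 23], [3.0, 24], [5.0, 25], [8.0, 28], [10.0, 30]])
y = 1000 + 1500 * X[:, 0] + 50 * X[:, 1]  # exact linear target

model = LinearRegression().fit(X, y)
print(model.coef_)       # one coefficient per feature: ~[1500, 50]
print(model.intercept_)  # ~1000
```

Because the target is exactly linear in the two columns, the fit recovers the slopes and intercept used to build it.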
<p>In [7]:</p>
<pre><code>import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression

data = pd.read_csv("../input/multiplelinearregressiondataset/multiple-linear-regression-dataset.csv")
print(data.info())
print(data.head())
#print(data.describe())

&lt;class 'pandas.core.frame.DataFrame'&gt;
RangeIndex: 14 entries, 0 to 13
Data columns (total 3 columns):
deneyim    14 non-null float64
maas       14 non-null int64
yas        14 non-null int64
dtypes: float64(1), int64(2)
memory usage: 416.0 bytes
None
   deneyim  maas  yas
0      0.5  2500   22
1      0.0  2250   21
2      1.0  2750   23
3      5.0  8000   25
4      8.0  9000   28
</code></pre><p>In [8]:</p>
<pre><code>x = data.iloc[:,[0,2]].values
y = data.maas.values.reshape(-1,1)

multiple_linear_regression = LinearRegression()
multiple_linear_regression.fit(x,y)
print("b0: ", multiple_linear_regression.intercept_)
print("b1: ", multiple_linear_regression.coef_)

# predict
x_ = np.array([[10,35],[5,35]])
multiple_linear_regression.predict(x_)

y_head = multiple_linear_regression.predict(x)
from sklearn.metrics import r2_score
print("r_square score: ", r2_score(y,y_head))

b0:  [10376.62747228]
b1:  [[1525.50072054 -416.72218625]]
r_square score:  0.9818393838730447
</code></pre><h2>Polynomial Linear Regression<a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Polynomial-Linear-Regression"></a></h2>
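<p>Under the hood, <code>PolynomialFeatures</code> only expands x into the columns [1, x, x^2, ..., x^n]; the fit itself is still ordinary <code>LinearRegression</code> on those columns. A minimal sketch of the expansion (toy values, not the car dataset):</p>

```python
# Hedged sketch: the feature matrix PolynomialFeatures builds for degree=3.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

x = np.array([[2.0], [3.0]])
x_poly = PolynomialFeatures(degree=3).fit_transform(x)
print(x_poly)
# row for x=2: [1., 2., 4., 8.]  (bias, x, x^2, x^3)
```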
<p>In [9]:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv("../input/polynomialregressioncsv/polynomial-regression.csv")
print(data.info())
print(data.head())
#print(data.describe())

&lt;class 'pandas.core.frame.DataFrame'&gt;
RangeIndex: 15 entries, 0 to 14
Data columns (total 2 columns):
araba_fiyat      15 non-null int64
araba_max_hiz    15 non-null int64
dtypes: int64(2)
memory usage: 320.0 bytes
None
   araba_fiyat  araba_max_hiz
0           60            180
1           70            180
2           80            200
3          100            200
4          120            200
</code></pre><p>In [10]:</p>
<pre><code>x = data.araba_fiyat.values.reshape(-1,1)
y = data.araba_max_hiz.values.reshape(-1,1)

plt.scatter(x,y)
plt.xlabel("araba_fiyat")
plt.ylabel("araba_max_hiz")
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___13_0.png" alt /></p>
<p>In [11]:</p>
<pre><code># polynomial regression: y = b0 + b1*x + b2*x^2 + b3*x^3 + ... + bn*x^n
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

polynominal_regression = PolynomialFeatures(degree=4)
x_polynomial = polynominal_regression.fit_transform(x,y)

# %% fit
linear_regression = LinearRegression()
linear_regression.fit(x_polynomial,y)

# %% visualize
y_head2 = linear_regression.predict(x_polynomial)
plt.plot(x,y_head2,color= "green",label = "poly")
plt.legend()
plt.scatter(x,y)
plt.xlabel("araba_fiyat")
plt.ylabel("araba_max_hiz")
plt.show()

from sklearn.metrics import r2_score
print("r_square score: ", r2_score(y,y_head2))
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___14_0.png" alt /></p>
<pre><code>r_square score:  0.9694743023124649
</code></pre><h2>Support Vector Regression<a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Support-Vector-Regression"></a></h2>
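<p>RBF-kernel SVR is sensitive to feature scale, which is why this section standardizes both x and y before fitting. A hedged alternative sketch of the same idea: a <code>Pipeline</code> scales x and a <code>TransformedTargetRegressor</code> scales y implicitly, avoiding the manual <code>sc1</code>/<code>sc2</code> steps. The salary values beyond the first five rows shown above are assumed, not taken from the notebook output:</p>

```python
# Hedged sketch: let Pipeline scale x and TransformedTargetRegressor scale y,
# instead of calling StandardScaler by hand as in the cells below.
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

x = np.arange(1, 11).reshape(-1, 1)  # e.g. education level 1..10
y = np.array([2250, 2500, 3000, 4000, 5500,
              7500, 10000, 15000, 25000, 50000], dtype=float)  # last 5 assumed

model = TransformedTargetRegressor(
    regressor=make_pipeline(StandardScaler(), SVR(kernel="rbf")),
    transformer=StandardScaler(),
)
model.fit(x, y)
print("train R^2:", model.score(x, y))
```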
<p>In [12]:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv("../input/support-vector-regression/maaslar.csv")
print(data.info())
print(data.head())
#print(data.describe())

&lt;class 'pandas.core.frame.DataFrame'&gt;
RangeIndex: 10 entries, 0 to 9
Data columns (total 3 columns):
unvan              10 non-null object
Egitim Seviyesi    10 non-null int64
maas               10 non-null int64
dtypes: int64(2), object(1)
memory usage: 320.0+ bytes
None
              unvan  Egitim Seviyesi  maas
0             Cayci                1  2250
1          Sekreter                2  2500
2  Uzman Yardimcisi                3  3000
3             Uzman                4  4000
4  Proje Yoneticisi                5  5500
</code></pre><p>In [13]:</p>
<pre><code>x = data.iloc[:,1:2].values
y = data.iloc[:,2:].values

plt.scatter(x,y)
plt.xlabel("egitim seviyesi")
plt.ylabel("maas")
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___17_0.png" alt /></p>
<p>In [14]:</p>
<pre><code># scaling the data (verilerin olceklenmesi)
from sklearn.preprocessing import StandardScaler
sc1 = StandardScaler()
x_olcekli = sc1.fit_transform(x)
sc2 = StandardScaler()
y_olcekli = sc2.fit_transform(y)

#%% SVR
from sklearn.svm import SVR
svr_reg = SVR(kernel = 'rbf')
svr_reg.fit(x_olcekli,y_olcekli)
y_head = svr_reg.predict(x_olcekli)

# visualize line
plt.plot(x_olcekli,y_head,color= "green",label = "SVR")
plt.legend()
plt.scatter(x_olcekli,y_olcekli,color='red')
plt.show()
print('R sq: ', svr_reg.score(x_olcekli, y_olcekli))

/opt/conda/lib/python3.6/site-packages/sklearn/utils/validation.py:585: DataConversionWarning: Data with input dtype int64 was converted to float64 by StandardScaler.
  warnings.warn(msg, DataConversionWarning)
/opt/conda/lib/python3.6/site-packages/sklearn/utils/validation.py:747: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
  y = column_or_1d(y, warn=True)
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___18_1.png" alt /></p>
<pre><code>R sq:  0.7513836788854973
</code></pre><h2>Decision Tree Regression<a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Decision-Tree-Regression"></a></h2>
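<p>One caveat worth keeping in mind for this section: the r_square score of 1.0 computed below is on the training data. An unrestricted decision tree can put every training point in its own leaf and reproduce the targets exactly, so a perfect training R&#178; says nothing about generalization. A minimal sketch (the first five y values mirror the dataset below; the rest are made up):</p>

```python
# Hedged sketch: a full tree memorizes its training targets (train R^2 == 1.0),
# while a depth-limited tree cannot.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

x = np.arange(1, 11).reshape(-1, 1)
y = np.array([100, 80, 70, 60, 50, 40, 30, 20, 15, 10], dtype=float)

full_tree = DecisionTreeRegressor(random_state=0).fit(x, y)
shallow_tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(x, y)
print("full tree train R^2:   ", full_tree.score(x, y))     # 1.0
print("shallow tree train R^2:", shallow_tree.score(x, y))  # below 1.0
```

With at most four leaves, the depth-2 tree has to average several distinct targets per leaf, so its training fit is necessarily imperfect.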
<p>In [15]:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

data = pd.read_csv("../input/decisiontreeregressiondataset/decision-tree-regression-dataset.csv", header=None)
print(data.info())
print(data.head())
#print(data.describe())

&lt;class 'pandas.core.frame.DataFrame'&gt;
RangeIndex: 10 entries, 0 to 9
Data columns (total 2 columns):
0    10 non-null int64
1    10 non-null int64
dtypes: int64(2)
memory usage: 240.0 bytes
None
   0    1
0  1  100
1  2   80
2  3   70
3  4   60
4  5   50
</code></pre><p>In [16]:</p>
<pre><code>x = data.iloc[:,[0]].values.reshape(-1,1)
y = data.iloc[:,[1]].values.reshape(-1,1)
</code></pre><p>In [17]:</p>
<pre><code>#%% decision tree regression
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor()
tree_reg.fit(x,y)
print(tree_reg.predict(np.array([5.5]).reshape(-1,1)))

[50.]
</code></pre><p>In [18]:</p>
<pre><code>x_ = np.arange(min(x),max(x),0.01).reshape(-1,1)
y_head = tree_reg.predict(x_)

# %% visualize
plt.scatter(x,y,color="red")
plt.plot(x_,y_head,color = "green")
plt.xlabel("tribun level")
plt.ylabel("ucret")
plt.show()

y_head = tree_reg.predict(x)
from sklearn.metrics import r2_score
print("r_square score: ", r2_score(y,y_head))
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___23_0.png" alt /></p>
<pre><code>r_square score:  1.0
</code></pre><h2>Random Forest Regression<a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Random-Forest-Regression"></a></h2>
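<p>A random forest trains many decision trees, each on a bootstrap sample of the data, and averages their predictions. A minimal sketch of that averaging (synthetic data; the query point 7.8 echoes the cell below, everything else is made up):</p>

```python
# Hedged sketch: a forest's prediction equals the mean of its trees' predictions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

x = np.arange(1, 11).reshape(-1, 1)
y = np.array([100, 80, 70, 60, 50, 40, 30, 20, 15, 10], dtype=float)

rf = RandomForestRegressor(n_estimators=100, random_state=42).fit(x, y)
query = np.array([[7.8]])
per_tree = np.array([tree.predict(query)[0] for tree in rf.estimators_])
print(rf.predict(query)[0], per_tree.mean())  # the two values agree
```

Averaging many bootstrapped trees is what smooths out the single tree's tendency to memorize the training set.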
<p>In [19]:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

data = pd.read_csv("../input/randomforestregressiondataset/random-forest-regression-dataset.csv", header=None)
print(data.info())
print(data.head())
#print(data.describe())

&lt;class 'pandas.core.frame.DataFrame'&gt;
RangeIndex: 10 entries, 0 to 9
Data columns (total 2 columns):
0    10 non-null int64
1    10 non-null int64
dtypes: int64(2)
memory usage: 240.0 bytes
None
   0    1
0  1  100
1  2   80
2  3   70
3  4   60
4  5   50
</code></pre><p>In [20]:</p>
<pre><code>x = data.iloc[:, 0].values.reshape(-1, 1)
y = data.iloc[:, 1].values.reshape(-1, 1)
</code></pre><p>In [21]:</p>
<pre><code>from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(n_estimators=100, random_state=42)
rf.fit(x, y)
# original Turkish: "7.8 seviyesinde fiyatın ne kadar olduğu" = "what the price is at level 7.8"
print("what the price is at level 7.8: ", rf.predict(np.array([7.8]).reshape(-1, 1)))
x_ = np.arange(min(x), max(x), 0.01).reshape(-1, 1)
y_head = rf.predict(x_)

what the price is at level 7.8:  [22.7]

/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:3: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().
</code></pre><p>In [22]:</p>
<pre><code># visualize
plt.scatter(x, y, color="red")
plt.plot(x_, y_head, color="green")
plt.xlabel("tribun level")
plt.ylabel("ucret")  # "ucret" = price
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___28_0.png" alt /></p>
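<p>The DataConversionWarning raised above happens because y was reshaped into a column vector while scikit-learn regressors expect a 1-d target. A small sketch with stand-in data (not the notebook's CSV) shows the usual fix, <code>y.ravel()</code>:</p>
<pre><code># y.ravel() flattens the (n_samples, 1) column vector to (n_samples,),
# which is the shape fit() expects -- no DataConversionWarning.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

x = np.arange(1, 11).reshape(-1, 1)
y = np.array([100, 80, 70, 60, 50, 40, 30, 20, 10, 5]).reshape(-1, 1)

rf = RandomForestRegressor(n_estimators=100, random_state=42)
rf.fit(x, y.ravel())          # 1-d target: no warning
print(rf.predict(np.array([[7.8]])))
</code></pre>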
<p>In [23]:</p>
<pre><code>y_head = rf.predict(x)
from sklearn.metrics import r2_score
print("r_score: ", r2_score(y, y_head))

r_score:  0.9798724794092587
</code></pre><h1 id="classificationhttpswwwkagglecombulentsiyahmachine-learning-exerciseclassification">Classification<a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Classification"></a></h1>
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#K-Nearest-Neighbour-(KNN)-Classification">K-Nearest Neighbour (KNN) Classification</a></p>
<p>In [24]:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

data = pd.read_csv("../input/classification/data.csv")
#print(data.info())
#print(data.head())
#print(data.describe())

# %%
data.drop(["id", "Unnamed: 32"], axis=1, inplace=True)
data.tail()
# M = malignant tumor
# B = benign tumor
</code></pre><p>In [25]:</p>
<pre><code># %%
M = data[data.diagnosis == "M"]
B = data[data.diagnosis == "B"]
# scatter plot ("kotu" = malignant, "iyi" = benign)
plt.scatter(M.radius_mean, M.texture_mean, color="red", label="kotu", alpha=0.3)
plt.scatter(B.radius_mean, B.texture_mean, color="green", label="iyi", alpha=0.3)
plt.xlabel("radius_mean")
plt.ylabel("texture_mean")
plt.legend()
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___32_0.png" alt /></p>
<p>In [26]:</p>
<pre><code># %%
data.diagnosis = [1 if each == "M" else 0 for each in data.diagnosis]
y = data.diagnosis.values
x_data = data.drop(["diagnosis"], axis=1)

# %% normalization
x = (x_data - np.min(x_data)) / (np.max(x_data) - np.min(x_data))
</code></pre><p>In [27]:</p>
<pre><code># %% train test split
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=1)

# %% knn model
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=3)  # n_neighbors = k
knn.fit(x_train, y_train)
prediction = knn.predict(x_test)
print(" {} nn score: {} ".format(3, knn.score(x_test, y_test)))

 3 nn score: 0.9532163742690059
</code></pre><p>In [28]:</p>
<pre><code># %% find k value
score_list = []
for each in range(1, 15):
    knn2 = KNeighborsClassifier(n_neighbors=each)
    knn2.fit(x_train, y_train)
    score_list.append(knn2.score(x_test, y_test))
plt.plot(range(1, 15), score_list)
plt.xlabel("k values")
plt.ylabel("accuracy")
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___35_0.png" alt /></p>
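<p>One caveat about the loop above: choosing k by test-set accuracy tunes the model on the very data used to report its score. A hedged sketch (synthetic data standing in for the notebook's features) selects k with 5-fold cross-validation on the training split instead:</p>
<pre><code># Cross-validate each candidate k on the training data only, then keep
# the test set untouched for the final score. Toy data, not the tumor CSV.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=1)
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

scores = {k: cross_val_score(KNeighborsClassifier(n_neighbors=k),
                             x_train, y_train, cv=5).mean()
          for k in range(1, 15)}
best_k = max(scores, key=scores.get)
print("best k by 5-fold CV:", best_k)
</code></pre>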
<p>In [29]:</p>
<pre><code># %% knn model with the chosen k
knn = KNeighborsClassifier(n_neighbors=8)  # n_neighbors = k
knn.fit(x_train, y_train)
prediction = knn.predict(x_test)
print(" {} nn score: {} ".format(8, knn.score(x_test, y_test)))

 8 nn score: 0.9649122807017544
</code></pre><p>In [30]:</p>
<pre><code># %% confusion matrix
y_pred = knn.predict(x_test)
y_true = y_test
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_true, y_pred)

# %% cm visualization
import seaborn as sns
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.xlabel("y_pred")
plt.ylabel("y_true")
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___37_0.png" alt /></p>
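<p>Accuracy alone can hide what the heatmap above makes visible. As a quick illustration (the matrix below is made up, not the notebook's actual cm), precision and recall fall straight out of a 2×2 confusion matrix:</p>
<pre><code># sklearn's confusion_matrix orders a binary cm as [[TN, FP], [FN, TP]],
# so ravel() yields tn, fp, fn, tp. Illustrative numbers only.
import numpy as np

cm = np.array([[100, 5],    # row 0: true benign    (TN, FP)
               [  4, 62]])  # row 1: true malignant (FN, TP)
tn, fp, fn, tp = cm.ravel()
precision = tp / (tp + fp)  # of predicted malignant, how many really are
recall = tp / (tp + fn)     # of true malignant, how many were caught
print(f"precision={precision:.3f} recall={recall:.3f}")
</code></pre>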
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Support-Vector-Machine-(SVM)-Classification">Support Vector Machine (SVM) Classification</a></p>
<p>In [31]:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

data = pd.read_csv("../input/classification/data.csv")
#print(data.info())
#print(data.head())
#print(data.describe())

# %%
data.drop(["id", "Unnamed: 32"], axis=1, inplace=True)
data.tail()
# M = malignant tumor
# B = benign tumor
</code></pre><p>In [32]:</p>
<pre><code># %%
M = data[data.diagnosis == "M"]
B = data[data.diagnosis == "B"]
# scatter plot ("kotu" = malignant, "iyi" = benign)
plt.scatter(M.radius_mean, M.texture_mean, color="red", label="kotu", alpha=0.3)
plt.scatter(B.radius_mean, B.texture_mean, color="green", label="iyi", alpha=0.3)
plt.xlabel("radius_mean")
plt.ylabel("texture_mean")
plt.legend()
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___40_0.png" alt /></p>
<p>In [33]:</p>
<pre><code># %%
data.diagnosis = [1 if each == "M" else 0 for each in data.diagnosis]
y = data.diagnosis.values
x_data = data.drop(["diagnosis"], axis=1)

# %% normalization
x = (x_data - np.min(x_data)) / (np.max(x_data) - np.min(x_data))
</code></pre><p>In [34]:</p>
<pre><code># %% train test split
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=1)

# %% SVM
from sklearn.svm import SVC
svm = SVC(random_state=1)
svm.fit(x_train, y_train)

# %% test
print("print accuracy of svm algo: ", svm.score(x_test, y_test))

print accuracy of svm algo:  0.9532163742690059

/opt/conda/lib/python3.6/site-packages/sklearn/svm/base.py:196: FutureWarning: The default value of gamma will change from 'auto' to 'scale' in version 0.22 to account better for unscaled features. Set gamma explicitly to 'auto' or 'scale' to avoid this warning.
</code></pre><p>In [35]:</p>
<pre><code># %% confusion matrix
y_pred = svm.predict(x_test)
y_true = y_test
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_true, y_pred)

# %% cm visualization
import seaborn as sns
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.xlabel("y_pred")
plt.ylabel("y_true")
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___43_0.png" alt /></p>
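<p>The FutureWarning printed earlier disappears once gamma is passed explicitly; in current scikit-learn releases "scale" is already the default. A minimal sketch on synthetic data (standing in for the tumor features):</p>
<pre><code># Setting gamma explicitly silences the old 'auto' vs 'scale' FutureWarning.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=1)
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

svm = SVC(gamma="scale", random_state=1)  # explicit gamma: no warning
svm.fit(x_train, y_train)
print("accuracy:", svm.score(x_test, y_test))
</code></pre>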
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Naive-Bayes-Classification">Naive Bayes Classification</a></p>
<p>In [36]:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

data = pd.read_csv("../input/classification/data.csv")
#print(data.info())
#print(data.head())
#print(data.describe())

# %%
data.drop(["id", "Unnamed: 32"], axis=1, inplace=True)
data.tail()
# M = malignant tumor
# B = benign tumor
</code></pre><p>In [37]:</p>
<pre><code># %%
M = data[data.diagnosis == "M"]
B = data[data.diagnosis == "B"]
# scatter plot ("kotu" = malignant, "iyi" = benign)
plt.scatter(M.radius_mean, M.texture_mean, color="red", label="kotu", alpha=0.3)
plt.scatter(B.radius_mean, B.texture_mean, color="green", label="iyi", alpha=0.3)
plt.xlabel("radius_mean")
plt.ylabel("texture_mean")
plt.legend()
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___46_0.png" alt /></p>
<p>In [38]:</p>
<pre><code># %%
data.diagnosis = [1 if each == "M" else 0 for each in data.diagnosis]
y = data.diagnosis.values
x_data = data.drop(["diagnosis"], axis=1)

# %% normalization
x = (x_data - np.min(x_data)) / (np.max(x_data) - np.min(x_data))
</code></pre><p>In [39]:</p>
<pre><code># %% train test split
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=1)

# %% naive bayes
from sklearn.naive_bayes import GaussianNB
nb = GaussianNB()
nb.fit(x_train, y_train)

# %% test
print("print accuracy of naive bayes algo: ", nb.score(x_test, y_test))

print accuracy of naive bayes algo:  0.935672514619883
</code></pre><p>In [40]:</p>
<pre><code># %% confusion matrix
y_pred = nb.predict(x_test)
y_true = y_test
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_true, y_pred)

# %% cm visualization
import seaborn as sns
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.xlabel("y_pred")
plt.ylabel("y_true")
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___49_0.png" alt /></p>
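<p>GaussianNB rests on a strong assumption: each feature is modeled as an independent Gaussian per class, and the class posterior follows by Bayes' rule. A toy sketch (synthetic clusters, not the tumor data) shows the posterior it produces:</p>
<pre><code># Two well-separated Gaussian clusters; a point at a cluster center
# should receive a high posterior for that class via predict_proba.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)),    # class 0 cluster around (0, 0)
               rng.normal(3, 1, (50, 2))])   # class 1 cluster around (3, 3)
y = np.array([0] * 50 + [1] * 50)

nb = GaussianNB().fit(X, y)
proba = nb.predict_proba([[3.0, 3.0]])[0]
print("P(class 1 | x):", proba[1])
</code></pre>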
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Decision-Tree-Classification">Decision Tree Classification</a></p>
<p>In [41]:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

data = pd.read_csv("../input/classification/data.csv")
#print(data.info())
#print(data.head())
#print(data.describe())

# %%
data.drop(["id", "Unnamed: 32"], axis=1, inplace=True)
data.tail()
# M = malignant tumor
# B = benign tumor
</code></pre><p>In [42]:</p>
<pre><code># %%
M = data[data.diagnosis == "M"]
B = data[data.diagnosis == "B"]
# scatter plot ("kotu" = malignant, "iyi" = benign)
plt.scatter(M.radius_mean, M.texture_mean, color="red", label="kotu", alpha=0.3)
plt.scatter(B.radius_mean, B.texture_mean, color="green", label="iyi", alpha=0.3)
plt.xlabel("radius_mean")
plt.ylabel("texture_mean")
plt.legend()
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___52_0.png" alt /></p>
<p>In [43]:</p>
<pre><code># %%
data.diagnosis = [1 if each == "M" else 0 for each in data.diagnosis]
y = data.diagnosis.values
x_data = data.drop(["diagnosis"], axis=1)

# %% normalization
x = (x_data - np.min(x_data)) / (np.max(x_data) - np.min(x_data))
</code></pre><p>In [44]:</p>
<pre><code># %% train test split
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.15, random_state=42)

# %%
from sklearn.tree import DecisionTreeClassifier
dt = DecisionTreeClassifier()
dt.fit(x_train, y_train)
print("score: ", dt.score(x_test, y_test))

score:  0.9302325581395349
</code></pre><p>In [45]:</p>
<pre><code># %% confusion matrix
y_pred = dt.predict(x_test)
y_true = y_test
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_true, y_pred)

# %% cm visualization
import seaborn as sns
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.xlabel("y_pred")
plt.ylabel("y_true")
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___55_0.png" alt /></p>
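<p>DecisionTreeClassifier grows unpruned by default, so it typically fits the training set perfectly; constraining <code>max_depth</code> (or <code>min_samples_leaf</code>) trades some training accuracy for generalization. A sketch on synthetic data (not the tumor dataset) illustrates the effect:</p>
<pre><code># Compare an unconstrained tree against a depth-limited one.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=42)
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=42)

full = DecisionTreeClassifier(random_state=42).fit(x_train, y_train)
pruned = DecisionTreeClassifier(max_depth=3, random_state=42).fit(x_train, y_train)
print("full tree train acc:", full.score(x_train, y_train))   # memorizes: 1.0
print("depth-3   train acc:", pruned.score(x_train, y_train))
print("full tree test acc :", full.score(x_test, y_test))
print("depth-3   test acc :", pruned.score(x_test, y_test))
</code></pre>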
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Random-Forest-Classification">Random Forest Classification</a></p>
<p>In [46]:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

data = pd.read_csv("../input/classification/data.csv")
#print(data.info())
#print(data.head())
#print(data.describe())

# %%
data.drop(["id", "Unnamed: 32"], axis=1, inplace=True)
data.tail()
# M = malignant tumor
# B = benign tumor
</code></pre><p>In [47]:</p>
<pre><code># %%
M = data[data.diagnosis == "M"]
B = data[data.diagnosis == "B"]

# scatter plot ("kotu" = malignant, "iyi" = benign; labels kept to match the figure legend)
plt.scatter(M.radius_mean, M.texture_mean, color="red", label="kotu", alpha=0.3)
plt.scatter(B.radius_mean, B.texture_mean, color="green", label="iyi", alpha=0.3)
plt.xlabel("radius_mean")
plt.ylabel("texture_mean")
plt.legend()
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___58_0.png" alt /></p>
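<p>The next cell encodes the M/B diagnosis labels with a list comprehension. As a minimal alternative sketch (on a small illustrative DataFrame, not the notebook's CSV), the same encoding can be done with pandas' <code>map</code>:</p>

```python
import pandas as pd

# tiny illustrative frame with the same M/B labels
df = pd.DataFrame({"diagnosis": ["M", "B", "B", "M"]})

# map malignant -> 1, benign -> 0, equivalent to the list comprehension below
df.diagnosis = df.diagnosis.map({"M": 1, "B": 0})
print(df.diagnosis.tolist())  # -> [1, 0, 0, 1]
```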
<p>In [48]:</p>
<pre><code># %%
data.diagnosis = [1 if each == "M" else 0 for each in data.diagnosis]
y = data.diagnosis.values
x_data = data.drop(["diagnosis"], axis=1)

# %% normalization
x = (x_data - np.min(x_data)) / (np.max(x_data) - np.min(x_data))
</code></pre><p>In [49]:</p>
<pre><code># %% train test split
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.15, random_state=42)

#%% random forest
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=100, random_state=1)
rf.fit(x_train, y_train)
print("random forest algo result: ", rf.score(x_test, y_test))

random forest algo result:  0.9534883720930233
</code></pre><p>In [50]:</p>
<pre><code>#%% confusion matrix
y_pred = rf.predict(x_test)
y_true = y_test
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_true, y_pred)

# %% cm visualization
import seaborn as sns
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cm, annot=True, linewidths=0.5, linecolor="red", fmt=".0f", ax=ax)
plt.xlabel("y_pred")
plt.ylabel("y_true")
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___61_0.png" alt /></p>
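<p>The score printed above reduces the confusion matrix to one accuracy number. As a minimal sketch (using an illustrative 2x2 matrix, not the notebook's actual counts), precision and recall can be derived from the same four cells:</p>

```python
import numpy as np

# illustrative 2x2 confusion matrix: rows = true class, columns = predicted class
cm = np.array([[52, 2],
               [2, 30]])

tn, fp, fn, tp = cm.ravel()          # unpack the four cells
accuracy = (tp + tn) / cm.sum()      # fraction of all correct predictions
precision = tp / (tp + fp)           # of predicted positives, how many are real
recall = tp / (tp + fn)              # of real positives, how many were found
print(accuracy, precision, recall)
```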
<h1 id="clusteringhttpswwwkagglecombulentsiyahmachine-learning-exerciseclustering">Clustering<a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Clustering"></a></h1>
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#K-Means-Clustering">K-Means Clustering</a></p>
<p>In [51]:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# %% create dataset
# class1
x1 = np.random.normal(25, 5, 1000)
y1 = np.random.normal(25, 5, 1000)
# class2
x2 = np.random.normal(55, 5, 1000)
y2 = np.random.normal(60, 5, 1000)
# class3
x3 = np.random.normal(55, 5, 1000)
y3 = np.random.normal(15, 5, 1000)

x = np.concatenate((x1, x2, x3), axis=0)
y = np.concatenate((y1, y2, y3), axis=0)

dictionary = {"x": x, "y": y}
data = pd.DataFrame(dictionary)

plt.scatter(x1, y1)
plt.scatter(x2, y2)
plt.scatter(x3, y3)
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___63_0.png" alt /></p>
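<p>The three Gaussian clusters above can also be generated in one call with scikit-learn's <code>make_blobs</code>; this is a sketch with centers and spread chosen to mirror the notebook's values, not part of the original exercise:</p>

```python
from sklearn.datasets import make_blobs
import pandas as pd

# 3000 points around the same three centers as above, std = 5
X, _ = make_blobs(n_samples=3000,
                  centers=[(25, 25), (55, 60), (55, 15)],
                  cluster_std=5, random_state=42)
data = pd.DataFrame(X, columns=["x", "y"])
print(data.shape)  # -> (3000, 2)
```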
<p>In [52]:</p>
<pre><code># %% KMEANS
from sklearn.cluster import KMeans

wcss = []
for k in range(1, 15):
    kmeans = KMeans(n_clusters=k)
    kmeans.fit(data)
    wcss.append(kmeans.inertia_)

plt.plot(range(1, 15), wcss)
plt.xlabel("number of k (cluster) value")
plt.ylabel("wcss")
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___64_0.png" alt /></p>
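<p>The elbow curve above is read by eye: WCSS (inertia) drops sharply until k reaches the true number of clusters, then flattens. A minimal self-contained sketch of the same loop on small synthetic data (not the notebook's dataset):</p>

```python
import numpy as np
from sklearn.cluster import KMeans

# three well-separated 2D blobs
rng = np.random.RandomState(42)
pts = np.vstack([rng.normal(loc, 1.0, size=(100, 2)) for loc in (0, 10, 20)])

wcss = []
for k in range(1, 7):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(pts)
    wcss.append(km.inertia_)  # within-cluster sum of squares

# by k = 3 (the true count) WCSS has collapsed; further k gives small gains
print(wcss)
```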
<p>In [53]:</p>
<pre><code>#%% my model for k = 3
kmeans2 = KMeans(n_clusters=3)
clusters = kmeans2.fit_predict(data)
data["label"] = clusters

plt.scatter(data.x[data.label == 0], data.y[data.label == 0], color="red")
plt.scatter(data.x[data.label == 1], data.y[data.label == 1], color="green")
plt.scatter(data.x[data.label == 2], data.y[data.label == 2], color="blue")
plt.scatter(kmeans2.cluster_centers_[:, 0], kmeans2.cluster_centers_[:, 1], color="yellow")
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___65_0.png" alt /></p>
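<p>Besides the elbow curve, the choice k = 3 can be checked with the silhouette score, which is near 1 for well-separated clusters. A minimal sketch on synthetic data (illustrative, not the notebook's dataset):</p>

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# three tight, well-separated blobs
rng = np.random.RandomState(0)
pts = np.vstack([rng.normal(c, 0.5, size=(50, 2)) for c in (0, 5, 10)])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pts)
score = silhouette_score(pts, labels)  # in [-1, 1]; higher = better separation
print(round(score, 3))
```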
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Hierarchical-Clustering">Hierarchical Clustering</a></p>
<p>In [54]:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# %% create dataset
# class1
x1 = np.random.normal(25, 5, 100)
y1 = np.random.normal(25, 5, 100)
# class2
x2 = np.random.normal(55, 5, 100)
y2 = np.random.normal(60, 5, 100)
# class3
x3 = np.random.normal(55, 5, 100)
y3 = np.random.normal(15, 5, 100)

x = np.concatenate((x1, x2, x3), axis=0)
y = np.concatenate((y1, y2, y3), axis=0)

dictionary = {"x": x, "y": y}
data = pd.DataFrame(dictionary)

plt.scatter(x1, y1, color="black")
plt.scatter(x2, y2, color="black")
plt.scatter(x3, y3, color="black")
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___67_0.png" alt /></p>
<p>In [55]:</p>
<pre><code># %% dendrogram
from scipy.cluster.hierarchy import linkage, dendrogram

merg = linkage(data, method="ward")
dendrogram(merg, leaf_rotation=90)
plt.xlabel("data points")
plt.ylabel("euclidean distance")
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___68_0.png" alt /></p>
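<p>Instead of reading the number of clusters off the dendrogram by eye, the linkage tree can be cut programmatically with SciPy's <code>fcluster</code>. A minimal self-contained sketch on synthetic data (illustrative, not the notebook's dataset):</p>

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# three well-separated blobs
rng = np.random.RandomState(1)
pts = np.vstack([rng.normal(c, 0.5, size=(30, 2)) for c in (0, 5, 10)])

merg = linkage(pts, method="ward")
# cut the tree so that at most 3 flat clusters remain
labels = fcluster(merg, t=3, criterion="maxclust")
print(np.unique(labels))  # cluster ids start at 1
```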
<p>In [56]:</p>
<pre><code># %% HC
from sklearn.cluster import AgglomerativeClustering

hiyerartical_cluster = AgglomerativeClustering(n_clusters=3, affinity="euclidean", linkage="ward")
cluster = hiyerartical_cluster.fit_predict(data)
data["label"] = cluster

plt.scatter(data.x[data.label == 0], data.y[data.label == 0], color="red")
plt.scatter(data.x[data.label == 1], data.y[data.label == 1], color="green")
plt.scatter(data.x[data.label == 2], data.y[data.label == 2], color="blue")
#plt.scatter(data.x[data.label == 3], data.y[data.label == 3], color="black")
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___69_0.png" alt /></p>
<h1 id="other-contenthttpswwwkagglecombulentsiyahmachine-learning-exerciseother-content">Other Content<a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Other-Content"></a></h1>
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Natural-Language-Process-(NLP)">Natural Language Process (NLP)</a></p>
<p>In [57]:</p>
<pre><code>import pandas as pd

# %% import twitter data
data = pd.read_csv("../input/natural-language-process-nlp/gender-classifier.csv", encoding="latin1")
data = pd.concat([data.gender, data.description], axis=1)
data.dropna(axis=0, inplace=True)
data.gender = [1 if each == "female" else 0 for each in data.gender]
print(data.info())
print(data.head())
#print(data.describe())

&lt;class 'pandas.core.frame.DataFrame'&gt;
Int64Index: 16224 entries, 0 to 20049
Data columns (total 2 columns):
gender         16224 non-null int64
description    16224 non-null object
dtypes: int64(1), object(1)
memory usage: 380.2+ KB
None
   gender                                        description
0       0                              i sing my own rhythm.
1       0  I'm the author of novels filled with family dr...
2       0               louis whining and squealing and all
3       0  Mobile guy.  49ers, Shazam, Google, Kleiner Pe...
4       1  Ricky Wilson The Best FRONTMAN/Kaiser Chiefs T...
</code></pre><p>In [58]:</p>
<pre><code>import nltk  # natural language tool kit
#nltk.download("stopwords")  # downloaded into a folder called "corpus"
from nltk.corpus import stopwords  # then I import it from the corpus folder
import re

description_list = []
for description in data.description:
    description = re.sub("[^a-zA-Z]", " ", description)  # regular expression (RE), e.g. "[^a-zA-Z]"
    description = description.lower()  # convert upper case to lower case
    # with split(), words like "shouldn't" would not be separated into "should" and "not";
    # word_tokenize() does separate them
    description = nltk.word_tokenize(description)
    description = [word for word in description if not word in set(stopwords.words("english"))]  # remove stop words
    lemma = nltk.WordNetLemmatizer()  # lemmatization: loved =&gt; love
    description = [lemma.lemmatize(word) for word in description]
    description = " ".join(description)
    description_list.append(description)
#print(description_list)
</code></pre><p>In [59]:</p>
<pre><code># %% bag of words
from sklearn.feature_extraction.text import CountVectorizer  # the method I use to create a bag of words

max_features = 5000
count_vectorizer = CountVectorizer(max_features=max_features, stop_words="english")
sparce_matrix = count_vectorizer.fit_transform(description_list).toarray()  # x
#print("most frequent {} words: {}".format(max_features, count_vectorizer.get_feature_names()))
</code></pre><p>In [60]:</p>
<pre><code># %%
y = data.iloc[:, 0].values  # male or female classes
x = sparce_matrix

# train test split
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.1, random_state=42)
</code></pre><p>In [61]:</p>
<pre><code># %% naive bayes
from sklearn.naive_bayes import GaussianNB
nb = GaussianNB()
nb.fit(x_train, y_train)

#%% prediction
y_pred = nb.predict(x_test)
# note: score() expects (X, y), so the correct call is nb.score(x_test, y_test);
# the call below is kept as in the original run, which produced the value shown
print("accuracy: ", nb.score(y_pred.reshape(-1, 1), y_test))

accuracy:  0.48120764017252005
</code></pre><p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Principal-Component-Analysis-(PCA)">Principal Component Analysis (PCA)</a></p>
<p>In [62]:</p>
<pre><code>from sklearn.datasets import load_iris
import pandas as pd

# %%
iris = load_iris()
feature_names = iris.feature_names
y = iris.target
data = pd.DataFrame(iris.data, columns=feature_names)
data["sinif"] = y
x = iris.data
print(data.info())
print(data.head())
#print(data.describe())

&lt;class 'pandas.core.frame.DataFrame'&gt;
RangeIndex: 150 entries, 0 to 149
Data columns (total 5 columns):
sepal length (cm)    150 non-null float64
sepal width (cm)     150 non-null float64
petal length (cm)    150 non-null float64
petal width (cm)     150 non-null float64
sinif                150 non-null int64
dtypes: float64(4), int64(1)
memory usage: 5.9 KB
None
   sepal length (cm)  sepal width (cm)  ...  petal width (cm)  sinif
0                5.1               3.5  ...               0.2      0
1                4.9               3.0  ...               0.2      0
2                4.7               3.2  ...               0.2      0
3                4.6               3.1  ...               0.2      0
4                5.0               3.6  ...               0.2      0

[5 rows x 5 columns]
</code></pre><p>In [63]:</p>
<pre><code>#%% PCA
from sklearn.decomposition import PCA
pca = PCA(n_components=2, whiten=True)  # whiten = normalize
pca.fit(x)
x_pca = pca.transform(x)
print("variance ratio: ", pca.explained_variance_ratio_)
print("sum: ", sum(pca.explained_variance_ratio_))

variance ratio:  [0.92461872 0.05306648]
sum:  0.977685206318795
</code></pre><p>In [64]:</p>
<pre><code>#%% 2D
data["p1"] = x_pca[:, 0]
data["p2"] = x_pca[:, 1]
color = ["red", "green", "blue"]

import matplotlib.pyplot as plt
for each in range(3):
    plt.scatter(data.p1[data.sinif == each], data.p2[data.sinif == each],
                color=color[each], label=iris.target_names[each])

plt.legend()
plt.xlabel("p1")
plt.ylabel("p2")
plt.show()
</code></pre><p><img src="https://www.kaggleusercontent.com/kf/34033834/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..GcCQgJBt7dxTmsCCA9vGoQ.CndvAOwVlrONHiCbO2jYRhCWTb8PBN4Euv63ENSbnArI4jHvvzABDqa06U8aDAWIxnsmPt8LbzpcxKUWbASiw-KnW0UNMznpqF54q2F8VyQYSew0XqOlrTHi0iuUoxk3CDQyIBwQ-618sewcWyy9YSx5xJs8f1wcDnXqsjBr373IJ9iU_MMb3SyttL7zjGvpegpCuj8bZQMk9e1DnEpp03WaBs6rpyX4ZhMTdEUE_OJhid-u9HB1-antIG2PDc-U5iekRMDhwkROdmM2jA0ZEYolemwF-egh0euDprPloWzl7M_PizUZEx0IYFYZkl3KfDUiLQ4KttPHriGnHtenyEwYnuotdJXDKJyaF417Vj8E08W0RlPmw9s8jNBU45uTea7WBmOy_ATsZcLEspCuPI81bJ26OBhJp1P7LLslbcBJClp-CSTepT5jwVa3tgPtsXkstf3LzcHTKkd794BY9IIolMoUvuVt5_2XaQYNaY-B-3vgV7Ainl43xintpHLjMsByOGRYOAbryB38oDltHTrNgGkJgARloehF_A5_w7tullz6l7PsyAh_rWGOMYE0ua8AVZuw1n0Zug7ieTqP2K_bVT-4yAlYoMywOcxxmfIJHqqiDBV3MKYatQfYyfQMuvOKfZKqYpP7EA7sQYIO2A.Dofwt8yquZJZBlBU1EzNZg/__results___files/__results___79_0.png" alt /></p>
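<p>Above, <code>n_components = 2</code> was fixed by hand and happened to capture about 97.8% of the variance. As a minimal sketch, scikit-learn can instead pick the number of components automatically from a target explained-variance ratio (here 95%, on the same iris data):</p>

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

x = load_iris().data

# a float n_components keeps the smallest number of components
# whose cumulative explained variance reaches that fraction
pca = PCA(n_components=0.95)
x_reduced = pca.fit_transform(x)
print(x_reduced.shape[1], sum(pca.explained_variance_ratio_))
```

<p>For iris the first component alone explains about 92.5%, which is below 95%, so two components are kept.</p>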
<p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Model-Selection">Model Selection</a></p>
<p>In [65]:</p>
<pre><code>from sklearn.datasets import load_iris
import pandas as pd
import numpy as np

#%%
iris = load_iris()
x = iris.data
y = iris.target
data = pd.DataFrame(iris.data, columns=iris.feature_names)
data["sinif"] = y
print(data.info())
print(data.head())
#print(data.describe())

# %% normalization
x = (x - np.min(x)) / (np.max(x) - np.min(x))

&lt;class 'pandas.core.frame.DataFrame'&gt;
RangeIndex: 150 entries, 0 to 149
Data columns (total 5 columns):
sepal length (cm)    150 non-null float64
sepal width (cm)     150 non-null float64
petal length (cm)    150 non-null float64
petal width (cm)     150 non-null float64
sinif                150 non-null int64
dtypes: float64(4), int64(1)
memory usage: 5.9 KB
None
   sepal length (cm)  sepal width (cm)  ...  petal width (cm)  sinif
0                5.1               3.5  ...               0.2      0
1                4.9               3.0  ...               0.2      0
2                4.7               3.2  ...               0.2      0
3                4.6               3.1  ...               0.2      0
4                5.0               3.6  ...               0.2      0

[5 rows x 5 columns]
</code></pre><p>In [66]:</p>
<pre><code># %% train test split
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)

# knn model
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=13)  # n_neighbors = k

# %% K-fold cross validation (K = 10)
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator=knn, X=x_train, y=y_train, cv=10)
print("average accuracy: ", np.mean(accuracies))
print("average std: ", np.std(accuracies))

knn.fit(x_train, y_train)
print("test accuracy: ", knn.score(x_test, y_test))

average accuracy:  <span class="hljs-number">0.9805555555555555</span>
average std:  <span class="hljs-number">0.03938179688543842</span>
test accuracy:  <span class="hljs-number">0.9555555555555556</span>
</code></pre><p>In [67]:</p>
<pre><code># Model selection: grid search cross validation for knn
from sklearn.model_selection import GridSearchCV
grid = {"n_neighbors": np.arange(1, 50)}
knn = KNeighborsClassifier()
knn_cv = GridSearchCV(knn, grid, cv=10)
knn_cv.fit(x, y)

# %% print the tuned hyperparameter (the K value of the KNN algorithm)
print("tuned hyperparameter K: ", knn_cv.best_params_)
print("tuned parametreye gore en iyi accuracy (best score): ", knn_cv.best_score_)

tuned hyperparameter K: {<span class="hljs-string">'n_neighbors'</span>: <span class="hljs-number">13</span>} tuned parametreye gore en iyi accuracy (best score): <span class="hljs-number">0.98</span> 
</code></pre><p>In [68]:</p>
<pre><code># Model selection: grid search CV with logistic regression
x = x[:100, :]  # the first 100 iris rows contain only classes 0 and 1 (binary problem)
y = y[:100]
from sklearn.linear_model import LogisticRegression
grid = {"C": np.logspace(-3, 3, 7), "penalty": ["l1", "l2"]}  # l1 = lasso, l2 = ridge
logreg = LogisticRegression()  # note: "l1" requires a solver such as liblinear/saga in newer sklearn
logreg_cv = GridSearchCV(logreg, grid, cv=10)
logreg_cv.fit(x, y)
print("tuned hyperparameters: (best parameters): ", logreg_cv.best_params_)
print("accuracy: ", logreg_cv.best_score_)

tuned hyperparameters: (best parameters): {<span class="hljs-string">'C'</span>: <span class="hljs-number">0.1</span>, <span class="hljs-string">'penalty'</span>: <span class="hljs-string">'l2'</span>} accuracy: <span class="hljs-number">1.0</span> 
</code></pre><p><a target="_blank" href="https://www.kaggle.com/bulentsiyah/machine-learning-exercise#Recommendation-Systems"></a></p>
<p>In [69]:</p>
<pre><code>import pandas as pd
import os
print(os.listdir("../input/movielens-20m-dataset/"))

# import the movie data set and look at its columns
movie = pd.read_csv("../input/movielens-20m-dataset/movie.csv")
print(movie.columns)
movie = movie.loc[:, ["movieId", "title"]]
movie.head(10)

['link.csv', 'genome_tags.csv', 'movie.csv', 'genome_scores.csv', 'tag.csv', 'rating.csv']
Index(['movieId', 'title', 'genres'], dtype='object')
</code></pre><p>In [70]:</p>
<pre><code># import the rating data and look at its columns
rating = pd.read_csv("../input/movielens-20m-dataset/rating.csv")
print(rating.columns)

# what we need is the user id, movie id and rating
rating = rating.loc[:, ["userId", "movieId", "rating"]]
rating.head(10)

<span class="hljs-keyword">Index</span>([<span class="hljs-string">'userId'</span>, <span class="hljs-string">'movieId'</span>, <span class="hljs-string">'rating'</span>, <span class="hljs-string">'timestamp'</span>], dtype=<span class="hljs-string">'object'</span>) 
</code></pre><p>In [71]:</p>
<pre><code># then merge the movie and rating data
data = pd.merge(movie, rating)

# now let's look at our data
data.head(10)
print(data.shape)
data = data.iloc[:1000000, :]  # use only the first 1,000,000 rows to limit memory usage

# make a pivot table whose rows are users, columns are movies and values are ratings
pivot_table = data.pivot_table(index=["userId"], columns=["title"], values="rating")
pivot_table.head(10)

(<span class="hljs-number">20000263</span>, <span class="hljs-number">4</span>) 
</code></pre><p>In [72]:</p>
<pre><code>movie_watched = pivot_table["Bad Boys (1995)"]
# find the correlation between "Bad Boys (1995)" and the other movies
similarity_with_other_movies = pivot_table.corrwith(movie_watched)
similarity_with_other_movies = similarity_with_other_movies.sort_values(ascending=False)
similarity_with_other_movies.head()
</code></pre>]]></content:encoded></item><item><title><![CDATA[Data Science and Visualization Exercise]]></title><description><![CDATA[It is the kernel that I have tried and compiled from the courses of DATAI Team (Language of the courses is Turkish: Data Science ve Python: Sıfırdan Uzmanlığa Veri Bilimi (2) and Data Visualization: A'dan Z'ye Veri Görselleştirme (3)), which is Grand...]]></description><link>https://www.bulentsiyah.com/data-science-ve-data-visualization-egzersizleri-kaggle</link><guid isPermaLink="true">https://www.bulentsiyah.com/data-science-ve-data-visualization-egzersizleri-kaggle</guid><category><![CDATA[#data visualisation]]></category><category><![CDATA[Python]]></category><category><![CDATA[Tutorial]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Thu, 09 Aug 2018 15:04:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1611500792668/gxNEi05eb.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It is the kernel that I have tried and compiled from the courses of <a target="_blank" href="https://www.udemy.com/user/datai-team/">DATAI Team</a> (Language of the courses is Turkish: <a target="_blank" href="https://www.udemy.com/data-science-sfrdan-uzmanlga-veri-bilimi-2/">Data Science ve Python: Sıfırdan Uzmanlığa Veri Bilimi (2)</a> and <a target="_blank" href="https://www.udemy.com/data-visualization-adan-zye-veri-gorsellestirme-3/">Data Visualization: A'dan Z'ye Veri Görselleştirme (3)</a>), which is <a target="_blank" href="https://www.kaggle.com/kanncaa1">Grandmaster on Kaggle</a> and has more than 15 courses on Udemy.</p>
<p><img src="https://iili.io/J11aa9.png" alt /></p>
<h1 id="contenthttpswwwkagglecombulentsiyahdata-science-and-visualization-exercisecontent"><strong>Content</strong><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#Content"></a></h1>
<h2 id="cleaning-datahttpswwwkagglecombulentsiyahdata-science-and-visualization-exercisecleaning-data">Cleaning Data<a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#Cleaning-Data"></a></h2>
<ul>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#1.">Diagnose data for cleaning</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#2.">Exploratory data analysis (EDA)</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#3.">Visual exploratory data analysis</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#4.">Tidy data</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#5.">Pivoting data</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#6.">Concatenating data</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#7.">Data types</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#8.">Missing data and testing with assert</a></li>
</ul>
<h2 id="manipulating-data-frames-with-pandashttpswwwkagglecombulentsiyahdata-science-and-visualization-exercisemanipulating-data-frames-with-pandas">Manipulating Data Frames with Pandas<a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#Manipulating-Data-Frames-with-Pandas"></a></h2>
<ul>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#9.">Index objects and labeled data</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#10.">Hierarchical indexing</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#11.">Pivoting data frames</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#12.">Stacking and unstacking data frames</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#13.">Melting data frames</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#14.">Categoricals and groupby</a></li>
</ul>
<h2 id="seabornhttpswwwkagglecombulentsiyahdata-science-and-visualization-exerciseseaborn">Seaborn<a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#Seaborn"></a></h2>
<ul>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#15.">Bar Plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#16.">Point Plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#17.">Joint Plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#18.">Pie Plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#19.">Lm Plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#20.">Kde Plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#21.">Violin Plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#22.">Heatmap</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#23.">Box Plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#24.">Swarm Plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#25.">Pair Plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#26.">Count Plot</a></li>
</ul>
<h2 id="plotlyhttpswwwkagglecombulentsiyahdata-science-and-visualization-exerciseplotly">Plotly<a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#Plotly"></a></h2>
<ul>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#27.">Line Plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#28.">Scatter Plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#29.">Bar Plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#30.">Pie Plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#31.">Bubble Plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#32.">Histogram</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#33.">Word Cloud</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#34.">Box Plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#35.">Scatter Plot Matrix</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#36.">Inset Plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#37.">3D Scatter Plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#38.">Multiple Subplots</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#39.">Animation Plot</a></li>
</ul>
<h2 id="visualization-toolshttpswwwkagglecombulentsiyahdata-science-and-visualization-exercisevisualization-tools">Visualization Tools<a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#Visualization-Tools"></a></h2>
<ul>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#40.">Parallel Plots (Pandas)</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#41.">Network Charts (networkx)</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#42.">Venn Diagram (matplotlib)</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#43.">Donut Plot (matplotlib)</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#44.">Spider Chart (matplotlib)</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise#45.">Cluster Map (seaborn)</a></li>
</ul>
<p> <a target="_blank" href="https://www.kaggle.com/bulentsiyah/data-science-and-visualization-exercise">You can check my Kaggle profile for details and codes</a> </p>
]]></content:encoded></item><item><title><![CDATA[Python Exercise]]></title><description><![CDATA[You can check my Kaggle profile for details and codes 
It is the kernel that I have tried and compiled from the courses of DATAI Team (Language of the courses is Turkish: Python: Sıfırdan Uzmanlığa Programlama (1)), which is Grandmaster on Kaggle and...]]></description><link>https://www.bulentsiyah.com/python-egzersizleri-kaggle</link><guid isPermaLink="true">https://www.bulentsiyah.com/python-egzersizleri-kaggle</guid><category><![CDATA[Python]]></category><category><![CDATA[python beginner]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Tue, 07 Aug 2018 15:10:05 GMT</pubDate><content:encoded><![CDATA[<p> <a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise">You can check my Kaggle profile for details and codes</a> </p>
<p>This is a kernel I put together while working through the courses of <a target="_blank" href="https://www.udemy.com/user/datai-team/">DATAI Team</a> (the courses are in Turkish: <a target="_blank" href="https://www.udemy.com/python-sfrdan-uzmanlga-programlama-1/">Python: Sıfırdan Uzmanlığa Programlama (1)</a>), who is a <a target="_blank" href="https://www.kaggle.com/kanncaa1">Grandmaster on Kaggle</a> and has more than 15 courses on Udemy.</p>
<h1 id="contenthttpswwwkagglecombulentsiyahpython-exercisecontent"><strong>Content</strong><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#Content"></a></h1>
<ol>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#1.">Python Basics</a><ul>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#2.">variable</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#3.">user defined functions</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#4.">default and flexible functions</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#5.">lambda function</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#6.">nested function</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#7.">anonymous function</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#8.">list</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#9.">tuple</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#10.">dictionary</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#11.">conditionals</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#12.">loops</a></li>
</ul>
</li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#13.">Object Oriented Programming</a><ul>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#14.">class</a></li>
</ul>
</li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#15.">Numpy</a><ul>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#16.">basic operations</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#17.">indexing and slicing</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#18.">shape manipulation</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#19.">convert and copy</a></li>
</ul>
</li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#20.">Pandas</a><ul>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#21.">indexing and slicing</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#22.">filtering</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#23.">list comprehension</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#24.">drop and concatenating</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#25.">transforming data</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#26.">iteration example</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#27.">zip example</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#28.">example of list comprehension</a></li>
</ul>
</li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#29.">Visualization with Matplotlib</a><ul>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#30.">line Plot example</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#31.">scatter plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#32.">histogram</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#33.">bar plot</a></li>
<li><a target="_blank" href="https://www.kaggle.com/bulentsiyah/python-exercise#34.">subplots</a></li>
</ul>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Tavlama Benzetimi ve Tabu Arama Algoritmaları ile Gezgin Satıcı Problemi (C#)]]></title><description><![CDATA[Tavlama Benzetimi ve Tabu Arama Algoritmaları ile Gezgin Satıcı Problemi (C#)

Gezgin Satıcı Problemi
GSP, n adet şehir arasındaki mesafelerin bilindiği durumda, şehirlerin her birine yalnız bir kez uğramak şartıyla, başlangıç noktasına geri dönülmes...]]></description><link>https://www.bulentsiyah.com/tavlama-benzetimi-ve-tabu-arama-algoritmalari-ile-gezgin-satici-problemi-c</link><guid isPermaLink="true">https://www.bulentsiyah.com/tavlama-benzetimi-ve-tabu-arama-algoritmalari-ile-gezgin-satici-problemi-c</guid><category><![CDATA[C#]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Sun, 31 Dec 2017 15:13:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1611501776775/tQ9qsHIDq.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="tavlama-benzetimi-ve-tabu-arama-algoritmalari-ile-gezgin-satici-problemi-c">Traveling Salesman Problem with Simulated Annealing and Tabu Search Algorithms (C#)</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611501695291/wjJrni3Mq.jpeg" alt="tabuarama_tavlamabenzetimi.jpg" /></p>
<h2 id="gezgin-satici-problemi">The Traveling Salesman Problem</h2>
<p>The TSP is the problem of finding, given the distances between n cities, the ordering of the cities (the optimal route) that minimizes the total distance traveled over a tour that visits each city exactly once and returns to the starting point. Widely applied in distribution, routing, facility location, planning, and logistics, the traveling salesman problem is also an NP-hard (hard-to-solve) problem that optimization researchers have studied for many years.</p>
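As a minimal illustration of the objective (the project itself is written in C#), the Python sketch below computes the length of a closed tour and brute-forces a tiny hypothetical 4-city instance; the coordinates are invented for the example and are not from the article:

```python
import itertools
import math

# Hypothetical 4-city instance (corners of a 4 x 3 rectangle).
cities = [(0, 0), (0, 3), (4, 3), (4, 0)]

def tour_length(order, cities):
    """Total distance of a closed tour that returns to the start city."""
    total = 0.0
    for i in range(len(order)):
        x1, y1 = cities[order[i]]
        x2, y2 = cities[order[(i + 1) % len(order)]]  # wrap around to the start
        total += math.hypot(x2 - x1, y2 - y1)
    return total

# Brute force is only feasible for tiny n (there are (n-1)!/2 distinct tours),
# which is why heuristics such as simulated annealing and tabu search are used.
best = min(itertools.permutations(range(len(cities))),
           key=lambda o: tour_length(o, cities))
print(tour_length(best, cities))  # 14.0 (the rectangle's perimeter)
```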
<h2 id="tavlama-benzetimi-algoritmasi-tb">The Simulated Annealing Algorithm (SA)</h2>
<p>The SA algorithm, first presented by Kirkpatrick, Gelatt, and Vecchi in 1983, is a local search algorithm developed for solving optimization problems.</p>
<p>The SA algorithm takes its name from annealing, the process of cooling molten metal. In this process a material is heated to reduce defects in its metallic structure, then slowly cooled into a solid crystalline state with a larger crystal size and minimum energy. Annealing requires careful control of the temperature and the cooling rate; the resulting crystallization comes from changes in the metal's molecular structure that improve its mechanical properties. The behavior of heat in annealing plays the same role as the control parameter in optimization: the temperature guides the algorithm toward better solutions, which is only possible if it is lowered gradually, in a controlled manner. If the temperature is dropped suddenly, the algorithm stops at a local minimum.</p>
<p>The SA algorithm is designed to find the maximum or minimum values of functions of many variables, and in particular the minima of nonlinear functions with many local minima. The algorithm has parameters such as the temperature, the temperature-reduction rate, and the number of iterations (the loop). An initial candidate solution is chosen and initially accepted as the best solution. A "tweak" operation (producing a new solution by making a small change to a copy of the candidate) is then applied. In the following steps, the algorithm checks which of the current best solution B and the newly produced solution Y has the higher quality, or whether a random number drawn between 0 and 1 is &lt; e^((Quality(Y) − Quality(B)) / temperature). If the new solution is better, it becomes the best solution. The temperature parameter starts from an initial value and is reduced by a fixed factor (the cooling parameter) at the end of each loop iteration. These steps continue until the best solution is found, until a predefined running time expires, or until the temperature parameter drops to zero or below.</p>
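The loop described above can be sketched in a few lines of Python (the article's project is in C#; the quality function, tweak, and parameter values below are illustrative assumptions, not taken from the project):

```python
import math
import random

def simulated_annealing(quality, tweak, initial, temp=10.0, cooling=0.95, steps=500):
    """Minimal simulated-annealing loop following the scheme described above.

    `quality` is maximized; `tweak` makes a small random change to a solution.
    """
    current = best = initial
    while temp > 1e-9 and steps > 0:
        candidate = tweak(current)
        delta = quality(candidate) - quality(current)
        # Always accept better candidates; accept worse ones with
        # probability exp((Quality(Y) - Quality(B)) / temperature).
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = candidate
        if quality(current) > quality(best):
            best = current
        temp *= cooling  # gradual cooling: a sudden drop strands the search in a local minimum
        steps -= 1
    return best

# Toy usage: maximize -(x - 3)^2, whose optimum is x = 3.
random.seed(0)
result = simulated_annealing(lambda x: -(x - 3) ** 2,
                             lambda x: x + random.uniform(-0.5, 0.5),
                             initial=0.0)
```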
<h2 id="tabu-arama-algoritmasi-ta">The Tabu Search Algorithm (TS)</h2>
<p>The TS algorithm, introduced by Glover and later extended in versions derived by Hansen, is fundamentally built on penalizing the move that led to the most recent solution, forbidding it from being repeated in the next iteration so that it cannot create cyclic moves.</p>
<p>The TS algorithm has a dynamic memory designed to store information about the solutions produced during the loop or the running time. The information held in this memory, also called the tabu list, is used to generate new solution sets in the search space. TS starts the optimization by producing an untried set of solutions obtained from the current solutions through a small change (a tweak). To avoid getting stuck at a local minimum in the solution space, the newly generated solution is accepted even if it is worse than the current one. Allowing worse solutions, however, can trap the algorithm in a cycle. To prevent this, a tabu list is created, and all forbidden moves that may not be applied to the current solution are stored in it.</p>
<p>Criteria called tabu restrictions determine whether a move is placed on the tabu list. Because the tabu list prevents previously tried solutions from being repeated for a certain number of iterations, it reduces the chance of getting stuck in one region during the search. TS starts with a feasible solution, that is, one satisfying the constraints in the mathematical formulation of the problem. The performance of tabu search depends on this initial solution, so it should start from as good a solution as possible. The tabu list works on a first-in, first-out (FIFO) basis: it is filled from the top according to the characteristics defined in the algorithm, and when the number of entries exceeds the list length, the element at the end of the list is removed to make room for the new one.</p>
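A minimal Python sketch of this scheme (the project itself is in C#; the list length, iteration count, and toy objective below are illustrative assumptions):

```python
from collections import deque

def tabu_search(quality, neighbors, initial, tabu_len=5, iterations=50):
    """Minimal tabu search following the description above.

    The tabu list is a FIFO memory of recently visited solutions; they may
    not be revisited, so the search can accept worse moves without cycling.
    """
    current = best = initial
    tabu = deque([initial], maxlen=tabu_len)  # oldest entry is dropped automatically
    for _ in range(iterations):
        # Candidate moves are the non-tabu neighbours of the current solution.
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = max(candidates, key=quality)  # best allowed move, even if worse
        tabu.append(current)
        if quality(current) > quality(best):
            best = current
    return best

# Toy usage: maximize -(x - 7)^2 over the integers, moving in steps of 1.
result = tabu_search(lambda x: -(x - 7) ** 2, lambda x: [x - 1, x + 1], initial=0)
print(result)  # 7
```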
<p>You can examine the code for this work on GitHub. With the comment lines included in the code, it should be easy to follow if you are familiar with the topic.</p>
<p>GitHub code: <a target="_blank" href="https://github.com/bulentsiyah/Tavlama-Benzetimi-ve-Tabu-Arama-Algoritmalari-ile-Gezgin-Satici-Problemi">https://github.com/bulentsiyah/Tavlama-Benzetimi-ve-Tabu-Arama-Algoritmalari-ile-Gezgin-Satici-Problemi</a></p>
]]></content:encoded></item><item><title><![CDATA[Android Jiroskop Sensörü ile Hareket Tanıma (Gesture recognition with Android (Gyroscope Sensor))]]></title><description><![CDATA[Uygulama ile jiroskop sensörü sayesinde yapılan hareketlerin birbirine benzerlikleri kıyaslanıp belirli bir eşiği aşan (Korelasyon seviyeleri her 3 boyutta göre) hareketlerin birbirinin aynısı olduğuna karar veriliyor.
Bu kıyaslama için korelasyon me...]]></description><link>https://www.bulentsiyah.com/android-jiroskop-sensoru-ile-hareket-tanima-gesture-recognition-with-android-gyroscope-sensor</link><guid isPermaLink="true">https://www.bulentsiyah.com/android-jiroskop-sensoru-ile-hareket-tanima-gesture-recognition-with-android-gyroscope-sensor</guid><category><![CDATA[android app development]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Tue, 20 Sep 2016 15:28:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1611502145351/RJjIPRV2L.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Using the gyroscope sensor, the application compares how similar performed gestures are to one another and decides that gestures exceeding a certain threshold (correlation levels in all three axes) are identical.</p>
<p>Correlation is used for this comparison. In the application, training gestures are entered first to create the gestures to compare against. A test gesture is then performed; when it is completed, correlation coefficients are computed in every axis (x, y, z) against all training gestures in the application. The training gesture with the highest value that also exceeds the chosen threshold identifies the test gesture. A gesture can stand for a letter, a shape, or a phrase such as a sentence. The goal is to match the motion performed from start to finish to a previously defined gesture.</p>
<p>Technical terms used in the project:<br />In probability theory and statistics, correlation indicates the direction and strength of the linear relationship between two random variables. In general statistical usage, correlation measures how far a situation is from independence.<br />The correlation coefficient expresses the direction and magnitude of the relationship between variables. It takes a value between -1 and +1: positive values indicate a direct linear relationship, negative values an inverse one. If the correlation coefficient is 0, there is no linear relationship between the variables.</p>
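The per-axis comparison can be sketched in Python (the app itself is an Android project; the signals and the 0.9 threshold below are made-up examples, not values from the article):

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

# Toy signals for one gyroscope axis; a real gesture has x, y and z streams,
# and all three coefficients must exceed the threshold.
train = [0.0, 1.0, 2.0, 1.0, 0.0]
same_shape = [0.1, 1.1, 2.1, 1.1, 0.1]   # shifted copy -> correlation 1.0
inverted = [2.0, 1.0, 0.0, 1.0, 2.0]     # mirrored -> correlation -1.0

THRESHOLD = 0.9
print(pearson(train, same_shape) > THRESHOLD)  # True
print(pearson(train, inverted) > THRESHOLD)    # False
```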
<p>A gyroscope (gyro; Turkish: jiroskop, yalpalık, cayroskop) is a device used to measure or maintain orientation, operating on the principle of preserving angular balance. Gyroscopic motion is based on the laws of physics and the principle of conservation of angular momentum.</p>
<p>GitHub code: <a target="_blank" href="https://github.com/bulentsiyah/Android_Jiroskop_Sensoru_ile_Hareket_Tanima">https://github.com/bulentsiyah/Android_Jiroskop_Sensoru_ile_Hareket_Tanima</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611502015855/LKIpfGyMY.png" alt="GestureRecognitionWithAndroid-1-1.png" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611502066233/sCCQT4Pm5.png" alt="GestureRecognitionWithAndroid-2-1.png" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611502071974/Z1yVy5Ixu.png" alt="GestureRecognitionWithAndroid-3-1.png" /></p>
]]></content:encoded></item><item><title><![CDATA[Two-Sided Assembly Line Balancing with Genetic Algorithms (Genetic Algorithms, C#)]]></title><description><![CDATA[Project Code
DOWNLOAD the full project: Program, Report and Code
1. INTRODUCTION
This study aims to minimise the number of stations on both sides of a two-sided assembly line within a given cycle time. The operations carry side and station ...]]></description><link>https://www.bulentsiyah.com/genetik-algoritma-ile-cift-tarafli-montaj-hatti-dengeleme-csharp</link><guid isPermaLink="true">https://www.bulentsiyah.com/genetik-algoritma-ile-cift-tarafli-montaj-hatti-dengeleme-csharp</guid><category><![CDATA[C#]]></category><category><![CDATA[algorithms]]></category><dc:creator><![CDATA[Bulent Siyah]]></dc:creator><pubDate>Tue, 01 Dec 2015 15:33:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1611502881922/4SrXhriAN.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="projenin-kodlari">Project Code</h2>
<p>Full project <a target="_blank" href="https://github.com/bulentsiyah/Genetik-Algoritma-ile-Cift-Tarafli-Montaj-Hatti-Dengeleme">DOWNLOAD: Program, Report and Code</a></p>
<h3 id="1-giris">1. GİRİŞ</h3>
<p>This study aims to minimise the number of stations on both sides of a two-sided assembly line within a given cycle time. The operations are subject to side and station-count constraints. The application developed to solve the problem with a Genetic Algorithm was written in the C# programming language.</p>
<p>Because a genetic algorithm is a randomised search method, it works on a set of solutions rather than searching for a single one. It moves toward the optimum through a subset of the candidate solutions, so the results of a run are not always the best possible. The genetic algorithm was chosen for this study because it requires no knowledge about the nature of the problem being optimised.</p>
<h3 id="2-materyal-ve-yontem">2. Materyal ve Yöntem</h3>
<p>Genetic Algorithms enable a directed random search across a large problem space that gradually approaches better solutions. Their main advantage is that they need no knowledge about the nature of the problem being optimised.</p>
<h4 id="21-genetik-algoritmalarin-temel-kavramlari">2.1. GENETİK ALGORİTMALARIN TEMEL KAVRAMLARI</h4>
<p>There is no single, established definition of a Genetic Algorithm in the evolutionary computation community, but the methods accepted as GAs share at least these elements: a population of chromosomes, selection according to fitness, crossover to produce new individuals, and random mutation of the new individuals. The inversion operator that Holland counted among the elements of the GA is rarely used today.</p>
<h5 id="211-kromozom-ve-topluluk">2.1.1. Kromozom ve Topluluk</h5>
<p>In a GA, chromosomes represent candidate solutions to the problem; each of these solutions is called an individual. The population is the set of chromosomes (individuals). As the GA operates on populations of chromosomes, each previous population is replaced by a newly produced one.</p>
<h5 id="212-uygunluk">2.1.2. Uygunluk</h5>
<p>The fitness value measures the quality of a solution and is computed with a fitness function. The fitness function assigns a score (the fitness value) to every chromosome in the current population. A chromosome's fitness reflects how well that chromosome solves the problem at hand.</p>
<h5 id="213-genetik-operatorler">2.1.3. Genetik Operatörler</h5>
<p>The simplest form of a GA uses three genetic operators: selection, crossover (two-point here) and mutation.<br />Selection: this operator picks chromosomes from the population for reproduction. The higher a chromosome's fitness, the more likely it is to be selected.<br />Crossover: this operator picks two loci at random and exchanges the segment between those loci in two chromosomes, leaving the parts before the first locus and after the second unchanged.<br />Mutation: this operator takes one or more randomly chosen individuals and, in each, swaps a randomly chosen gene with the gene that holds its value.</p>
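<p>The two structural operators above can be sketched for chromosomes encoded as integer arrays. This is an illustrative Java sketch, not the project's C# code, and the names are invented for the example:</p>

```java
import java.util.Random;

// Illustrative sketch of the crossover and mutation operators described
// above, for chromosomes stored as int arrays.
public class Operators {

    // Two-point crossover: genes before the first locus and after the
    // second come from the first parent, the middle segment from the second.
    public static int[] twoPointCrossover(int[] p1, int[] p2, int cut1, int cut2) {
        int[] child = p1.clone();
        for (int i = cut1; i < cut2; i++) child[i] = p2[i];
        return child;
    }

    // Swap mutation: exchange the values at two randomly chosen loci.
    // The multiset of gene values is preserved, which keeps a
    // permutation-encoded chromosome valid.
    public static void swapMutation(int[] chrom, Random rnd) {
        int i = rnd.nextInt(chrom.length), j = rnd.nextInt(chrom.length);
        int tmp = chrom[i]; chrom[i] = chrom[j]; chrom[j] = tmp;
    }
}
```

<p>Swap mutation is used rather than value replacement because, as section 2.4.1 explains, the chromosomes here are permutations and must not contain repeated operation numbers.</p>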
<h4 id="22-genetik-algoritma-parametreleri">2.2. GENETİK ALGORİTMA PARAMETRELERİ</h4>
<h5 id="221-topluluk-buyuklugu">2.2.1. Topluluk Büyüklüğü</h5>
<p>The value chosen for the population size affects the algorithm's performance in two ways. First, if the population is far too small, the search space is sampled inadequately, controlled exploration becomes hard to maintain, and the search drifts toward a sub-optimal point. Second, if the value is far too large, a single generation of evolution takes a very long time.</p>
<h5 id="222-caprazlama-orani">2.2.2. Çaprazlama Oranı</h5>
<p>This parameter sets the frequency with which the crossover operator is applied to chromosomes during reproduction. A low crossover rate lets only a few new structures (offspring that differ from their parents) into the next generation, so the reproduction operator dominates the algorithm and the convergence rate of the search drops. A high crossover rate causes the search space to be explored very quickly; but if the rate is excessive, strong structures are broken up before crossover can produce similar or better ones, and the algorithm's performance falls.</p>
<h5 id="223-mutasyon-orani">2.2.3. Mutasyon Oranı</h5>
<p>The frequency of the mutation operation must be controlled carefully to design an effective genetic algorithm. Mutation brings new regions into the search. Too high a mutation rate injects excessive randomness and makes the search diverge quickly; in other words, it destroys the population rather than evolving it. Conversely, too low a mutation rate restricts exploration so much that the search space cannot be fully explored, leading the algorithm to a sub-optimal solution.</p>
<h4 id="23-genetik-algoritma-sureci">2.3. GENETİK ALGORİTMA SÜRECİ</h4>
<p>Given a problem to solve and candidate solutions represented as strings of numbers, the GA works as follows:<br />1. Start with a randomly generated population of N chromosomes (candidate solutions for the problem).<br />2. Compute the fitness value f(x) for every chromosome x in the population.<br />3. Repeat the following steps until n individuals (the population size) have been created.<br />3.a. Select two parent chromosomes from the current population, bearing in mind that higher fitness raises the probability of selection. Selection is done so that the same chromosome can be chosen as a parent more than once.<br />3.b. With probability Pc (the "crossover probability" or "crossover rate"), cross the selected pair at two randomly chosen points to produce two new individuals. If crossover does not occur, create two children that are exact copies of their parents. Two-point crossover was chosen here.<br />3.c. With probability Pm (the "mutation probability" or "mutation rate"), mutate one or more of the resulting children at some or all of their loci; mutation is performed by changing the genes at those loci.<br />3.d. Add the resulting chromosomes to the new population.<br />4. Replace the previous population with the new one; this yields the new generation.<br />5. If the termination condition is met, return the best solution in the current population; otherwise return to step 2. The termination condition is usually a number of generations; in some genetic algorithms it can be obtaining an individual whose fitness falls within a target range.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611502676080/omQwAuq7D.png" alt="bb.png" /></p>
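<p>The five steps above can be sketched end to end on a deliberately simple toy problem (maximise the number of 1-bits in a binary chromosome). This is an illustrative Java sketch under simplifying assumptions, not the assembly-line solver: it produces one child per parent pair, uses tournament selection, and omits elitism. All names are invented for the example:</p>

```java
import java.util.Random;

// Minimal end-to-end illustration of GA steps 1-5 on a one-max toy
// problem. The assembly-line problem in the post instead uses
// permutation chromosomes and the fitness function of section 2.4.2.
public class SimpleGA {
    static final Random RND = new Random(42);

    // Fitness: count of 1-bits (higher is better for this toy problem).
    static int fitness(int[] chrom) {
        int f = 0;
        for (int g : chrom) f += g;
        return f;
    }

    // Binary tournament: draw two individuals, keep the fitter.
    static int[] tournament(int[][] pop) {
        int[] a = pop[RND.nextInt(pop.length)], b = pop[RND.nextInt(pop.length)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    public static int[] run(int popSize, int genes, int generations, double pc, double pm) {
        // Step 1: random initial population.
        int[][] pop = new int[popSize][genes];
        for (int[] c : pop) for (int i = 0; i < genes; i++) c[i] = RND.nextInt(2);

        for (int gen = 0; gen < generations; gen++) {
            int[][] next = new int[popSize][];
            for (int k = 0; k < popSize; k++) {
                // Steps 2 and 3.a: fitness is evaluated inside tournament().
                int[] p1 = tournament(pop), p2 = tournament(pop);
                int[] child = p1.clone();
                // Step 3.b: two-point crossover with probability pc.
                if (RND.nextDouble() < pc) {
                    int c1 = RND.nextInt(genes), c2 = RND.nextInt(genes);
                    for (int i = Math.min(c1, c2); i < Math.max(c1, c2); i++) child[i] = p2[i];
                }
                // Step 3.c: bit-flip mutation with probability pm per gene.
                for (int i = 0; i < genes; i++) if (RND.nextDouble() < pm) child[i] ^= 1;
                next[k] = child;   // step 3.d
            }
            pop = next;            // step 4: replace the population
        }
        // Step 5: return the best individual of the final population.
        int[] best = pop[0];
        for (int[] c : pop) if (fitness(c) > fitness(best)) best = c;
        return best;
    }
}
```

<p>With a population of 30, the loop reliably drives the best individual toward the all-ones optimum within a few dozen generations, illustrating the convergence behaviour discussed in section 2.2.</p>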
<h4 id="24-surecin-probleme-uygulanisi">2.4. SÜRECİN PROBLEME UYGULANIŞI</h4>
<h5 id="241-ilk-toplum-olusturulmasi-ve-kromozomlarin-kodlanmasi">2.4.1. İlk Toplum Oluşturulması ve Kromozomların Kodlanması</h5>
<p>One of the first problems met when writing a GA program is how to encode the chromosomes. The encoding depends on the type of problem. Sometimes binary encoding works best, while for other problems permutation or value encoding is the most suitable way to reach the optimum. Permutation encoding is widely used for order-sensitive problems such as the travelling salesman problem and network design.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611502710246/s_JmxNoMT.png" alt="bb2.png" /></p>
<p>A population size must be decided for the problem: if it is chosen far too large, evolution slows down, and if far too small, the search space is sampled inadequately, so a population of 30 individuals was chosen as the most suitable here. The number of individuals can be changed in the program's interface. Permutation encoding was selected and, since the two sides of the line have different numbers of operations, the left side is represented by 10 operations and therefore 10 genes, and the right side by 17 operations, i.e. 17 genes. The number in each gene equals an operation number. As an example, the genes of chromosome number 7 on the left side of the line are encoded as: 4051296741038.</p>
<p>When the initial population is created, the genes of all individuals are generated at random. The function that builds the initial population runs only in the first generation; in later generations the old population is simply replaced by the new one, so the function is not used again. In a loop over the number of individuals, the function produces as many random numbers as each individual needs genes. Because permutation encoding is used, individuals with repeated gene values are prevented.</p>
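<p>The initial-population routine described above can be sketched as follows. This is an illustrative Java sketch, not the project's C# code; each chromosome is a duplicate-free random ordering of the operation numbers for one side of the line (10 on the left, 17 on the right):</p>

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Illustrative sketch: build an initial population of permutation
// chromosomes with no repeated gene values, as section 2.4.1 requires.
public class InitialPopulation {
    public static int[][] create(int individuals, int[] operations, Random rnd) {
        int[][] pop = new int[individuals][];
        for (int k = 0; k < individuals; k++) {
            List<Integer> genes = new ArrayList<>();
            for (int op : operations) genes.add(op);
            Collections.shuffle(genes, rnd);   // random order, every operation exactly once
            pop[k] = genes.stream().mapToInt(Integer::intValue).toArray();
        }
        return pop;
    }
}
```

<p>Shuffling a list of the operation numbers guarantees a valid permutation directly, instead of drawing random numbers and rejecting duplicates.</p>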
<h5 id="242-toplumdaki-her-kromozomun-uygunluk-degerinin-olculmesi">2.4.2. Toplumdaki Her Kromozomun Uygunluk Değerinin Ölçülmesi</h5>
<p>The function that evaluates how good each chromosome representing an individual in the population is, is called the fitness function. It is the only problem-specific part of this GA program. The fitness function decodes the chromosomes and then computes their fitness values. Based on the resulting values, it is decided whether each chromosome will appear in the next generation. Once the fitness of the whole population has been determined, the termination criterion is checked. This criterion can be chosen in several ways: one option is reaching the desired number of generations; another is the ratio of the population's overall fitness to the fitness of the best solution reaching an acceptable value.</p>
<p>The fitness values are computed as follows. The genes represent operations; they are grouped in order and processed against the worker's net working time (260 min) to find the lost time. Grouping is done against a chosen threshold: operations (genes) that satisfy the threshold are placed in the same group and assigned to that station. The computation against the working time goes like this. The durations of all genes in a group that satisfies the threshold are summed. The total is taken modulo the worker's working time; the result of the modulo is the remainder. The total is also divided by the working time; the result is the quotient. For the fitness value, the remainder is first subtracted from the worker's working time; that difference is then multiplied by one more than the quotient. One detail matters: if the threshold can never be passed, all operations are collected into a single station. For this undesirable case the lost time is taken as the sum of all operations, so at this step a single-station grouping, having a high fitness (lost-time) value, becomes very likely to be eliminated. If desired, the fitness function can use a separate threshold for each side of the line. After the function runs, the fitness values of all individuals are collected in an array; the arrays correspond to one another through their indices.</p>
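<p>The lost-time arithmetic above can be written out directly. This is an illustrative Java sketch of my reading of the formula (remainder and quotient of the station time against the 260-minute working time); the names are invented and the original C# implementation is in the linked project:</p>

```java
// Illustrative sketch of the fitness (lost-time) calculation of section
// 2.4.2. Lower values are better: the text notes that a high value makes
// elimination more likely. The 260 minutes is the worker's net working time.
public class Fitness {
    public static final int WORK_MINUTES = 260;

    // totalStationTime: summed duration of the operations grouped into a station.
    public static int lostTime(int totalStationTime) {
        int remainder = totalStationTime % WORK_MINUTES;   // the "remainder" value
        int quotient  = totalStationTime / WORK_MINUTES;   // the "quotient" value
        // (working time - remainder), multiplied by one more than the quotient
        return (WORK_MINUTES - remainder) * (quotient + 1);
    }
}
```

<p>For example, a station whose operations total 200 minutes leaves a worker idle for 60 minutes of one shift, while 500 minutes leaves 20 idle minutes spread over two shifts.</p>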
<h4 id="243-yeni-toplum-olustur">2.4.3. YENİ TOPLUM OLUŞTUR</h4>
<h5 id="2431-elitizm-yapilmasi">2.4.3.1. Elitizm Yapılması</h5>
<p>Elitism preserves the best solution found so far, represented by the best-performing individual(s), and passes it to the next generation unchanged. If the whole population were produced by crossover alone, good individuals could be lost or the search could drift away from the solution. To prevent this, elitism was applied by copying the two best individuals of the population straight into the new population. In the elitism function, besides selecting the individuals with the best fitness values, all individuals are sorted by fitness; this makes the later comparisons when selecting individuals for crossover much easier.</p>
<h5 id="2432-secilim-ve-caprazlama">2.4.3.2. Seçilim ve Çaprazlama</h5>
<p>There are many methods for selecting the best chromosomes, including roulette wheel selection, Boltzmann selection, tournament selection, rank selection and steady-state selection. Binary tournament selection was used here: two individuals are drawn at random from the population and the fitter one is kept; then two more are drawn at random and again the fitter one is kept. The two finalists are thereby selected for the crossover operation.</p>
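<p>Binary tournament selection as described above can be sketched in a few lines. This is an illustrative Java sketch with invented names; following section 2.4.2 it treats a lower fitness (lost time) as better:</p>

```java
import java.util.Random;

// Illustrative sketch of binary tournament selection: draw two random
// individuals and return the index of the fitter one. Lower fitness
// (lost time) is treated as better, as in section 2.4.2.
public class TournamentSelection {
    public static int select(double[] fitness, Random rnd) {
        int a = rnd.nextInt(fitness.length);
        int b = rnd.nextInt(fitness.length);
        return fitness[a] <= fitness[b] ? a : b;   // lower lost time wins
    }
}
```

<p>Calling <code>select</code> twice yields the two parents for one crossover, and because the winner of each draw is only ever compared locally, no global sort is strictly required, although the post notes that the sort done during elitism makes the comparisons cheaper.</p>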
<p>In general, once the chromosome encoding has been decided, crossover is defined as a mutual exchange of genes between two individuals picked at random from the higher-fitness chromosomes of the population, around a randomly chosen point. Crossover is performed to obtain new chromosomes with better fitness than the two randomly selected high-fitness parents. It is extremely important for producing a new generation with good fitness values and for reaching the problem's optimum: because different chromosomes are crossed, different traits can be tested and recombined quickly. The result is diversity alongside near-copies of the two parents, moving the search toward the best solution. Two-point crossover was chosen for the application: two cut points are taken, the bits up to the first point are copied to the new individual from the first chromosome, the bits between the two points from the second, and the remainder again from the first chromosome.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611502741694/4mwNQq_Mj.png" alt="bb31.png" />
To apply the binary tournament method chosen for the application, pairs of individuals are drawn at random from the population and compared. This comparison is made easier by the fact that the individuals were already sorted by fitness in the elitism function: comparing the randomly drawn index values is then sufficient. The two individuals selected for crossover are crossed with the two-point method and passed into the new population.</p>
<h5 id="2433-mutasyon">2.4.3.3. Mutasyon</h5>
<p>Another factor important for a GA program to reach a result is the mutation operation. In situations where the program concentrates on one region and is kept from reaching the solution, mutation is used so that the problem can be searched over a wider area; this prevents the solutions in the population from falling into a local optimum. In the application, one of the individuals produced by crossover is mutated: a random individual is chosen, and a randomly chosen gene is swapped with the gene that holds its value.</p>
<h4 id="244-onceki-toplum-ile-yeni-toplumu-degistir">2.4.4. ÖNCEKİ TOPLUM İLE YENİ TOPLUMU DEĞİŞTİR</h4>
<p>The previous population is replaced with the new one, producing the new generation.</p>
<h4 id="245-sonlandirma-kosulu-kontrolu">2.4.5. SONLANDIRMA KOŞULU KONTROLU</h4>
<p>The termination condition for the problem is a set number of iterations. When the loop ends, the best solution found for the problem has been obtained.</p>
<h3 id="3-performans-analizi">3. PERFORMANS ANALİZİ</h3>
<p>The study aims to minimise the number of stations on both sides of the two-sided assembly line within the given cycle time. Each operation on a line has a fixed duration; at the stations where operations are gathered, the time spent performing them equals the sum of those operations. This time should fit into whole multiples of a worker's working time; otherwise the worker waits at the end of the operations, producing lost time. The goal of this application is to use the genetic algorithm to reduce that lost time as far as possible.</p>
<h4 id="31-operasyonlar-ve-cevrim-sureleri">3.1. OPERASYONLAR VE ÇEVRİM SÜRELERİ</h4>
<p>To define the problem, the fixed, distinct operation numbers for each station and the durations of those operations are given in the table below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611502788085/qbfrwu4pf.png" alt="bb4.PNG" /></p>
<h4 id="32-problemin-varolan-durumu">3.2. PROBLEMİN VAROLAN DURUMU</h4>
<p>The current state of the stations:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611502828238/Gz18loAl5.png" alt="bb4 (1).png" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611502847463/QUSc8MkJP.png" alt="bb5.PNG" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611502906675/KF13Yn0ot.png" alt="a3.png" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611502913608/UPCJb6_dv.png" alt="a6.png" /></p>
]]></content:encoded></item></channel></rss>