House Robber Algorithm

In this post I will explain an interesting algorithm commonly called the House Robber algorithm. Suppose a robber is planning to steal money from a row of houses, and the amount of money in each house is painted on it. The condition is that he must not steal from adjacent houses: if he steals from the nth house, he cannot steal from the (n-1)th or the (n+1)th house. How can he steal the maximum amount under this condition?

We can solve this easily using dynamic programming with recursion.

Suppose we have houses in this order: House = 1, 2, 3, 4, …, n,
and A(House) is the amount in that house.

If we greedily pick the house with the highest amount and start from there, max(A(1), A(2), A(3), …), chances are we end up with a smaller total. Hence, to find the best solution, we maintain two values at each house: Including (I) and Excluding (E). For each house n with amount A(n),

I(n) = A(n) + E(n-1)
E(n) = max[I(n-1), E(n-1)]

After computing these values recursively at each house, max(I(n), E(n)) at the last house is the maximum amount the robber can steal.

Let me explain it more clearly with an example.
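The recurrence above can be sketched in Python. This is a minimal illustration; the rolling-variable form stores only the previous I and E instead of full arrays:

```python
def rob(amounts):
    """Maximum loot when no two adjacent houses may be robbed."""
    include, exclude = 0, 0  # I and E before the first house
    for a in amounts:
        # I(n) = A(n) + E(n-1);  E(n) = max[I(n-1), E(n-1)]
        include, exclude = a + exclude, max(include, exclude)
    return max(include, exclude)

print(rob([2, 7, 9, 3, 1]))  # 12: rob the houses with 2, 9 and 1
```

Each house is processed once, so this runs in O(n) time with O(1) extra space.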

Algorithms to find missing number among 1 to n numbers

I was trying to find a missing number among 1 to n, stored in an array in arbitrary order, where n is a very big number. I came up with 3 approaches, along with their time complexities.

  1. Sort the list first using a sorting algorithm (a comparison sort costs O(n log n)), then traverse the whole array and check whether element n+1 follows element n. If it doesn't, that is the missing number. The traversal costs O(n), so the total is O(n log n) + O(n) = O(n log n).
  2. The second approach I thought of is finding the sum of 1 to n using n*(n+1)/2, then finding the sum of the given numbers; the traversal to compute the sum is O(n). The difference of the two sums gives the missing number, so the total is O(n) + O(n) = O(n). This is good if n is a small number, but if n is very big, the sum can overflow the data type in some languages.
  3. I assume this is the best approach of the three: we can use the XOR binary operation. We know (n XOR n) gives zero, e.g. 1 XOR 1 = 0, 2 XOR 2 = 0. So XOR together the numbers 1 to n and all the array elements (1 to n with the missing number absent); every value cancels in pairs, leaving the missing number. The iteration is O(n), with O(1) extra space and no overflow risk.
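Approaches 2 and 3 can be sketched in Python as a minimal illustration (note that Python integers do not overflow, so the sum approach's caveat applies to fixed-width integer types in other languages):

```python
def missing_by_sum(nums, n):
    """Approach 2: expected sum n*(n+1)/2 minus the actual sum."""
    return n * (n + 1) // 2 - sum(nums)

def missing_by_xor(nums, n):
    """Approach 3: XOR 1..n with every element; pairs cancel, leaving the missing number."""
    x = 0
    for i in range(1, n + 1):
        x ^= i
    for v in nums:
        x ^= v
    return x

print(missing_by_sum([1, 2, 4, 5], 5))  # 3
print(missing_by_xor([1, 2, 4, 5], 5))  # 3
```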

Google App Engine Jsp Issue

Unlike servlets, JSPs used in a Google App Engine project in Eclipse will show errors like the one below.

(screenshot: App Engine JSP error in Eclipse)

The simple reason is that Eclipse is using the default JRE, which the .jsp files cannot compile against (they need a full JDK).
I will give a simple solution for this.

SOLUTION:

1. Go to Window -> Preferences -> Java -> Installed JREs
2. Click Add -> Standard VM -> Next
3. Click Directory and select the directory where the JDK is installed


How to run HADOOP Map Reduce Program in UBUNTU ?

Many beginners have asked me this basic question: "How to run a Hadoop MapReduce program in Ubuntu?"

I can understand the difficulty beginners face using Hadoop on Ubuntu, so I'm going to answer it with a brief explanation.
Generally, after installation, we would like to test whether we have installed Hadoop successfully or not.
The first thing we need to do is check whether the expected Hadoop processes are running, using the jps tool that ships with the JDK.

hduser@ubuntu:/hadoop$ jps
2287 TaskTracker
2149 JobTracker
1938 DataNode
2085 SecondaryNameNode
2349 Jps
1788 NameNode

Sometimes running this tool may raise an error that jps is not present and that another JDK version needs to be installed. In that case, use the netstat command to check if Hadoop is listening on the configured ports.

hduser@ubuntu:~$ sudo netstat -plten | grep java
tcp   0  0 0.0.0.0:50070   0.0.0.0:*  LISTEN  1001  9236  2471/java
tcp   0  0 0.0.0.0:50010   0.0.0.0:*  LISTEN  1001  9998  2628/java
tcp   0  0 0.0.0.0:48159   0.0.0.0:*  LISTEN  1001  8496  2628/java
tcp   0  0 0.0.0.0:53121   0.0.0.0:*  LISTEN  1001  9228  2857/java
tcp   0  0 127.0.0.1:54310 0.0.0.0:*  LISTEN  1001  8143  2471/java
tcp   0  0 127.0.0.1:54311 0.0.0.0:*  LISTEN  1001  9230  2857/java
tcp   0  0 0.0.0.0:59305   0.0.0.0:*  LISTEN  1001  8141  2471/java
tcp   0  0 0.0.0.0:50060   0.0.0.0:*  LISTEN  1001  9857  3005/java
tcp   0  0 0.0.0.0:49900   0.0.0.0:*  LISTEN  1001  9037  2785/java
tcp   0  0 0.0.0.0:50030   0.0.0.0:*  LISTEN  1001  9773  2857/java

Yes, they are listening. The next step is to run the sample program:

hduser@ubuntu:/home/hemanth/hadoop$ bin/hadoop jar hadoop-examples-1.0.4.jar pi 4 100

You will get the output.

Now for another scenario: running a MapReduce program that we have written ourselves.

1. Save the MapReduce program with a matching name, e.g. WordCount.java, and create a folder named wordcount_classes in the
hadoop folder
2. Next, execute the following commands in order (don't forget to create the input and output directories)

$ javac -classpath /home/hemanth/hadoop/hadoop-core-1.0.4.jar -d /home/hemanth/hadoop/wordcount_classes /home/hemanth/hadoop/WordCount.java

root@ubuntu:/home/hemanth/hadoop# jar -cvf wordcount.jar -C wordcount_classes/ .

hduser@ubuntu:/home/hemanth/hadoop$ bin/hadoop jar /home/hemanth/hadoop/wordcount.jar WordCount -r 2 /user/hduser/gutenberg /user/hduser/gutenberg-output2

Now check the output. That's all.
Feel free to ask me any queries regarding this. Tq 🙂

BIG QUERY: Analytics goooooooooogles way

I've been wondering how I forgot to write an article on BigQuery. A year back, when I heard the words "Big Query" from Google, I felt these guys were planning to conquer the BIG DATA world as well. It's obvious, because Google showed the world the GFS (Google File System) and MapReduce concepts, which gave birth to Hadoop. HAIL GOOGLE!!! for your innovation.
Coming back to Google BigQuery: it is a full-fledged big data tool hosted on the cloud. Google created this online tool where you can analyze your big data for a per-use fee, similar to other cloud offerings.

Wanna practically see the advantage of BigQuery?
Yes, you can: there is a demo based on 2 datasets, WIKIPEDIA & data from WEATHER STATIONS.
Try the demo here:

https://demobigquery.appspot.com



BIG DATA + ORACLE + HADOOP

This is my idea and explanation, given to one of my colleagues who asked me HOW TO USE ORACLE TECHNOLOGY WITH BIG DATA.

As you can see below, we can combine the open-source technology Hadoop with Oracle technologies. I hope you can easily understand it 🙂

(diagram: combining Hadoop with Oracle technologies)

Map Reduce Program Error in Ubuntu

Sometimes running a MapReduce program may result in this kind of error. The reason is that the output directory already exists, which means you have already run the job before.

hduser@ubuntu:/home/hemanth/hadoop$ bin/hadoop jar hadoop-examples-1.0.4.jar pi 10 100
Warning: $HADOOP_HOME is deprecated.
Number of Maps = 10
Samples per Map = 100
java.io.IOException: Tmp directory
hdfs://localhost:54310/user/hduser/PiEstimator_TMP_3_141592654 already exists.
Please remove it first.
at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:270)
at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

I resolved this issue like this :

hduser@ubuntu:/home/hemanth/hadoop$ bin/hadoop fs -rmr hdfs://localhost:54310/user/hduser/PiEstimator_TMP_3_141592654
Warning: $HADOOP_HOME is deprecated.
Deleted hdfs://localhost:54310/user/hduser/PiEstimator_TMP_3_141592654

Now run the program again, and you will get the output.

SOA Composer Not Opening Up in Oracle SOA 11gR1 PS3 (11.1.1.4.0)

Problem: After installing Oracle SOA 11g, if you try opening SOA Composer (http://host:7001/soa/composer) to edit Rules, DVMs or tasks, you might encounter the issue below:

"Error–404 Not Found"


Solution: While installing SOA, to optimize server memory usage, certain applications (including SOA Composer) do not get targeted and hence are not deployed by default when a server starts.

1) Login to Weblogic Admin console and go to “Deployments” section
2) Click composer
3) Go to “Targets” tab, select all components and click on “Change Targets”
4) Select “AdminServer” in “Target Deployments” screen and click on “Yes”
5) Make sure that “Current Target”section, now shows “AdminServer” for all components and you see the success message on Weblogic Admin Console
6) Now, try to open the SOA Composer web console in a web browser; it should open

Version number of Deployed Project

When the same Oracle SOA project is deployed with 2 different version numbers (e.g. 1.0 & 2.0), the version number is appended
for the old version (i.e. 1.0).

If the project name is Transactions, and we check the WSDLs in EM for that project, they look like this:
Transactions[1.0] –>
http://01HW342887-VM2.India.com:8001/soa-infra/services/Check/Adding!1.0*soa_ebfd70ad-f584-422f-b34f-5033e40b56de/bpeladding_client_ep?WSDL

Transactions[2.0] –>
http://01HW342887-VM2.India.com:8001/soa-infra/services/Check/Adding/bpeladding_client_ep?WSDL

For the older version, the version number is appended in the WSDL URL.

BAM: could not reserve enough space for object heap

While starting the BAM server like this (startManagedWebLogic.cmd bam_server1) in the command prompt, you may sometimes get the following error:
"Could not reserve enough space for object heap".

Steps to resolve this error:

1. Go to setSOADomainEnv.cmd, located in the user_projects\domains\domain1\bin\ folder
2. Then edit the file by changing
set DEFAULT_MEM_ARGS=-Xms512m -Xmx1024m
to
set DEFAULT_MEM_ARGS=-Xms256m -Xmx512m
3. Save it and restart the servers again.